I'm trying to find a way to scan my entire Linux system for all files containing a specific string of text. Just to clarify, I'm looking for text within the file, not in the file name.
When I was looking up how to do this, I came across this solution twice:
However, it doesn't work. It seems to display every single file in the system.
Is this close to the proper way to do it? If not, how should I? This ability to find text strings in files would be extraordinarily useful for some programming projects I'm doing.
Do the following:
-r (or -R) is recursive, -n prints the line number, and -w matches only whole words. -l (lower-case L) can be added to print just the names of matching files.
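The command itself was lost from this extract; as a sketch of how the flags described above combine (the sandbox directory and the search string 'hello' are placeholders for illustration):

```shell
# Build a tiny sandbox so the example is self-contained
dir=$(mktemp -d)
printf 'hello world\nhelloween\n' > "$dir/a.txt"
printf 'no match here\n' > "$dir/b.txt"

# -r: recurse into directories, -n: print line numbers,
# -w: match 'hello' only as a whole word, so 'helloween' is skipped
grep -rnw "$dir" -e 'hello'
# prints: <dir>/a.txt:1:hello world

# -l prints only the names of matching files
grep -rlw "$dir" -e 'hello'
# prints: <dir>/a.txt

rm -rf "$dir"
```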
Along with these, the
--exclude or --include parameters can be used for more efficient searching. Something like the following:
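The original example is missing here; a sketch of an --include search, with made-up sandbox files for illustration:

```shell
dir=$(mktemp -d)
printf 'int pattern = 0;\n' > "$dir/main.c"
printf 'pattern\n'          > "$dir/notes.txt"

# Only files matching *.c or *.h are searched; notes.txt is ignored.
# (In bash, --include=\*.{c,h} is brace-expansion shorthand for the
# two explicit options below.)
grep -rnw --include='*.c' --include='*.h' "$dir" -e 'pattern'
# prints: <dir>/main.c:1:int pattern = 0;

rm -rf "$dir"
```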
This will only search through files that have .c or .h extensions. Similarly, a sample use of
--exclude:
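The example command was lost in extraction; a sketch of the --exclude form (sandbox files are placeholders):

```shell
dir=$(mktemp -d)
printf 'pattern\n' > "$dir/main.c"
printf 'pattern\n' > "$dir/main.o"

# Files matching *.o are skipped during the search
grep -rnw --exclude='*.o' "$dir" -e 'pattern'
# prints: <dir>/main.c:1:pattern

rm -rf "$dir"
```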
The above will exclude all files ending with the .o extension from the search. Just as with files, it's possible to exclude/include directories through the
--exclude-dir and --include-dir parameters; for example, the following shows how to integrate --exclude-dir:
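The original --exclude-dir example is missing; a sketch, with the directory names invented for illustration (multiple directories can be excluded by repeating the option, or via bash braces like --exclude-dir={build,.git}):

```shell
dir=$(mktemp -d)
mkdir "$dir/src" "$dir/build"
printf 'pattern\n' > "$dir/src/a.c"
printf 'pattern\n' > "$dir/build/a.c"

# Everything under any directory named 'build' is skipped
grep -rnw --exclude-dir=build "$dir" -e 'pattern'
# prints: <dir>/src/a.c:1:pattern

rm -rf "$dir"
```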
This works very well for me, achieving almost the same purpose as yours.
For more options, see man grep.
…`.` as a single-character wildcard, among others. My advice is to always use either `fgrep` or `egrep`. – Walter Tross Oct 28 '13 at 11:54

…`-H` with `-l` (and maybe `grep` with `fgrep`). To exclude files with certain patterns of names you would use `find` in a more advanced way. It's worthwhile to learn to use `find`, though. Just `man find`. – Walter Tross Oct 28 '13 at 12:01

…`/` in your command with a directory of your choice, quite often. – Walter Tross Oct 29 '13 at 13:32

`find … -exec <cmd> {} +` is easier to type and faster than `find … -exec <cmd> {} \;`. It works only if `<cmd>` accepts any number of file name arguments. The saving in execution time is especially big if `<cmd>` is slow to start, like Python or Ruby scripts. – hagello Jan 28 at 5:16