L5_regular expression command for linux unix
1. Regular expressions
Used by several different UNIX commands, including ed, sed, awk, and grep
A period '.' matches any single character
.X. matches any X that is surrounded by any two characters
The caret character ^ matches the beginning of the line
^Bridgeport matches the characters Bridgeport only if they occur at the beginning of the line
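These two meta-characters can be tried directly with grep (a minimal sketch; the sample words are invented):

```shell
# '.' matches any single character: c.t matches cat and cut, but not cart
printf 'cat\ncut\ncart\n' | grep 'c.t'

# '^' anchors the match to the beginning of the line
printf 'Bridgeport news\nfrom Bridgeport\n' | grep '^Bridgeport'
```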
2. Regular expressions (continue.)
A dollar sign '$' is used to match the end of the line
Bridgeport$ will match the characters Bridgeport only if they are the very last characters on the line
.$ matches any single character at the end of the line
To match a literal period, the character should be preceded by a backslash '\' to remove its special meaning
\.$ matches any line ending with a period
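The difference between the escaped and unescaped period can be seen with grep (a sketch on invented sample lines):

```shell
# '\.$' matches only lines that end with a literal period
printf 'The end.\nno period here\n' | grep '\.$'

# '.$' (unescaped) matches any line that has at least one character
printf 'The end.\nno period here\n' | grep '.$'
```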
3. Regular expressions (continue.)
^$ matches any line that contains no characters
[...] is used to match any single character enclosed in the brackets
[tT] matches a lower or upper case t
[A-Z] matches any upper case letter
[A-Za-z] matches any upper or lower case letter
[^A-Z] matches any character except an upper case letter
[^A-Za-z] matches any non-alphabetic character
4. Regular expressions (continue.)
The asterisk '*' matches zero or more occurrences of the preceding regular expression
X* matches zero, one, two, three, … capital X's
XX* matches one or more capital X's
.* matches zero or more occurrences of any characters
e.*e matches all the characters from the first e in the line to the last one
[A-Za-z][A-Za-z]* matches any alphabetic character followed by zero or more alphabetic characters
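The X* versus XX* distinction can be checked with grep (a sketch; the input lines are made up):

```shell
# X* also matches zero X's, so it would select every line;
# XX* requires at least one X
printf 'X\nXXX\nnone\n' | grep 'XX*'
```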
5. Regular expressions (continue.)
[-0-9] matches a single dash or digit character
(ORDER IS IMPORTANT: the dash must come first or last)
[0-9-] same as [-0-9]
[^-0-9] matches any character except a digit or dash
[]a-z] matches a right bracket or a lower case letter
(ORDER IS IMPORTANT: the right bracket must come first)
6. Regular expressions (continue.)
\{min,max\} matches a precise number of occurrences of the preceding regular expression (in basic regular expressions the braces must be preceded by backslashes)
min specifies the minimum number of occurrences to be matched, and max specifies the maximum
w\{1,10\} matches from 1 to 10 consecutive w's
[a-zA-Z]\{7\} matches exactly seven alphabetic characters
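An interval can be tried with grep's basic regular expression syntax (a sketch; the input is invented):

```shell
# exactly three w's on the line: anchors plus the \{3\} interval
printf 'ww\nwww\nwwww\n' | grep '^w\{3\}$'
```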
7. Regular expressions (continue.)
X\{5,\} matches at least five consecutive X's
\(...\) is used to save matched characters
^\(.\) matches the first character on the line and stores it into register one
There are nine registers, numbered 1 through 9
To retrieve what is stored in register n, \n is used
Example: ^\(.\)\1 matches the first two characters on a line if they are both the same character
8. Regular expressions (continue.)
^\(.\).*\1$ matches all lines in which the first character on the line is the same as the last.
Note: .* matches all the characters in between
^\(...\)\(...\) stores the first three characters on the line into register 1 and the next three characters into register 2
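The save-and-recall pattern can be tested with grep (a sketch; the sample lines are invented):

```shell
# lines whose first and last characters are the same:
# \(.\) saves the first character, \1 recalls it at the end
printf 'abca\nabcd\nxx\n' | grep '^\(.\).*\1$'
```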
9. cut
$ who
bgeorge pts/16 Oct 5 15:01 (216.87.102.204)
abakshi pts/13 Oct 6 19:48 (216.87.102.220)
tphilip pts/11 Oct 2 14:10 (AC8C6085.ipt.aol.com)
$ who | cut -c1-8,18-
bgeorge Oct 5 15:01 (216.87.102.204)
abakshi Oct 6 19:48 (216.87.102.220)
tphilip Oct 2 14:10 (AC8C6085.ipt.aol.com)
$
Used in extracting various fields of data from a data file or the
output of a command
Format: cut -cchars file
chars specifies what characters to extract from each line of file.
10. cut (continue.)
Examples: -c5, -c1,3,4, -c10-15, -c5-
The -d and -f options are used with cut when you have data that is delimited by a particular character
Format: cut -ddchar -ffields file
dchar: the delimiter character for the fields (default: tab
character)
fields: fields to be extracted from file
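The -d and -f options can be sketched on an invented colon-separated list:

```shell
# -d sets the field delimiter, -f selects which fields to keep
printf 'Edward:336-145\nAlice:334-121\n' | cut -d: -f1   # names only
printf 'Edward:336-145\nAlice:334-121\n' | cut -d: -f2   # numbers only
```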
11. cut (continue.)
$ cat phonebook
Edward 336-145
Alice 334-121
Sony 332-336
Robert 326-056
$ cut -f1 phonebook
Edward
Alice
Sony
Robert
$
15. paste (continue.)
Example:
$ cat students
Sue
Vara
Elvis
Luis
Eliza
$ cat sid
578426
452869
354896
455468
335123
$ paste students sid
Sue 578426
Vara 452869
Elvis 354896
Luis 455468
Eliza 335123
$
16. paste (continue.)
The -s option tells paste to paste together lines from the same file, not from alternate files
To change the delimiter, the -d option is used
17. paste (continue.)
Examples:
$ paste -d '+' students sid
Sue+578426
Vara+452869
Elvis+354896
Luis+455468
Eliza+335123
$ paste -s students
Sue Vara Elvis Luis Eliza
$ ls | paste -d ' ' -s -
addr args list mail memo name nsmail phonebook programs roster sid
students test tp twice user
$
18. sed
sed (stream editor) is a program used for editing data
Unlike ed, sed cannot be used interactively
Format: sed command file
command: applied to each line of the specified file
file: if no file is specified, then standard input is assumed
sed writes the output to the standard output
The command s/Unix/UNIX/ is applied to every line in the file; it replaces the first Unix on each line with UNIX
19. sed (continue.)
sed makes no changes to the original input file
The command 's/Unix/UNIX/g' is applied to every line in the file. It replaces every Unix with UNIX ("g" means global)
With the -n option, selected lines can be printed
Example: sed -n '1,2p' file prints the first two lines
Example: sed -n '/UNIX/p' file prints any line containing UNIX
20. sed (continue.)
Example: sed '1,2d' file deletes lines 1 and 2 (the d command deletes; -n is not needed here)
Example: sed -n 'l' text prints all lines from text, showing nonprinting characters as \nn (octal) and tab characters as >
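The sed commands above can be tried on a small stream (a sketch; the input text is invented):

```shell
echo 'Unix is Unix' | sed 's/Unix/UNIX/'     # first occurrence only
echo 'Unix is Unix' | sed 's/Unix/UNIX/g'    # every occurrence
printf 'one\ntwo\nthree\n' | sed -n '1,2p'   # print only lines 1-2
printf 'one\ntwo\nthree\n' | sed '1,2d'      # delete lines 1-2
```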
21. tr
The tr filter is used to translate characters from standard input
Format: tr from-chars to-chars
The result is written to standard output
Example: tr e x < file translates every "e" in file to "x" and prints the output to the standard output
The octal representation of a character can be given to tr in the format \nnn
Example: tr : '\11' will translate all : to tabs
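The octal form can be sketched on a one-line stream (sample data invented):

```shell
# \11 is the octal code for tab; quote it so the shell leaves it alone
echo 'a:b:c' | tr ':' '\11'
```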
22. tr (continue.)
Character            Octal value
Bell                 7
Backspace            10
Tab                  11
Newline (linefeed)   12
Form feed            14
Carriage return      15
Escape               33
23. tr (continue.)
Example: tr '[a-z]' '[A-Z]' < file translates all lower case letters in file to their uppercase equivalents.
The character ranges [a-z] and [A-Z] are enclosed in quotes to keep the shell from replacing them with all files named from a through z and A through Z
To "squeeze" out multiple occurrences of characters, the -s option is used
24. tr (continue.)
Example: tr -s ' ' ' ' < file will squeeze multiple spaces to one space
The -d option is used to delete single characters from a stream of input
Format: tr -d from-chars
Example: tr -d ' ' < file will delete all spaces from the input stream
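Both options can be sketched on invented one-line inputs:

```shell
echo 'too   many   spaces' | tr -s ' '    # squeeze runs of spaces to one
echo 'drop all spaces' | tr -d ' '        # delete spaces entirely
```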
25. grep
Searches one or more files for particular character patterns
Format: grep pattern files
Example: grep path .cshrc will print every line in the .cshrc file containing the pattern 'path'
Example: grep bin .cshrc .login .profile will print every line from any of the three files .cshrc, .login and .profile containing the pattern "bin"
26. grep (continue.)
Example: grep * smarts will give an error, because the shell substitutes * with the names of all the files in the current directory before grep runs
Example: grep '*' smarts
Quoting the asterisk keeps the shell from expanding it, so grep receives the arguments '*' and smarts and searches the file smarts for lines containing a literal *
27. sort
By default, sort takes each line of the specified input file and
sorts it into ascending order
$ cat students
Sue
Vara
Elvis
Luis
Eliza
$ sort students
Eliza
Elvis
Luis
Sue
Vara
$
28. sort (continue.)
The -u option tells sort to eliminate
duplicate lines from the output
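A sketch of sorting with duplicate elimination, using a small invented roster:

```shell
# -u sorts and drops duplicate lines in one step
printf 'Sue\nAsh\nVara\nAsh\n' | sort -u
```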
29. sort (continue.)
$ echo Ash >> students
$ cat students
Sue
Vara
Elvis
Luis
Eliza
Ash
Ash
$ sort students
Ash
Ash
Eliza
Elvis
Luis
Sue
Vara
30. sort (continue.)
The -r option reverses the order of the sort
The -o option is used to direct the output to a file instead of the standard output
sort students > sorted_students works like sort students -o sorted_students
The -o option allows sorting a file and saving the output to the same file
Example:
sort students -o students     correct
sort students > students      incorrect
31. sort (continue.)
The -n option tells sort to treat the first field as a number and to sort the data arithmetically
33. sort (continue.)
To sort by the second field, +1n should be used instead of -n; +1 says to skip the first field
+5n would mean to skip the first five fields on each line and then sort the data numerically
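A sketch of sorting by the second field, using invented name/score pairs; the `-k` form shown here is the modern equivalent of the historical `+1n` syntax:

```shell
# sort numerically on field 2; '-k 2n' corresponds to the old '+1n'
# (skip one field, then sort the rest arithmetically)
printf 'Sue 10\nVara 2\nLuis 33\n' | sort -k 2n
```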
35. uniq
Used to find duplicate lines in a file
Format: uniq in_file out_file
uniq will copy in_file to out_file removing
any duplicate lines in the process
uniq’s definition of duplicated lines are
consecutive-occurring lines that match
exactly
36. uniq (continue.)
$ cat students
Sue
Vara
Elvis
Luis
Eliza
Ash
Ash
$ uniq students
Sue
Vara
Elvis
Luis
Eliza
Ash
$
The -d option is used to list only the duplicated lines
Example:
$ uniq -d students
Ash
$
38. References
UNIX Shells by Example, Ellie Quigley
UNIX for Programmers and Users, G. Glass and K. Ables
UNIX Shell Programming, S. Kochan and P. Wood