
Curl command in Linux


In this article, we discuss the curl command-line utility, a tool used to transfer data to and from a server.

Table of contents.

  1. Introduction.
  2. Installation.
  3. Syntax.
  4. Downloading files.
  5. Resuming downloads.
  6. Progress bar.
  7. Redirection.
  8. Limiting bandwidth.
  9. FTP upload and download.
  10. Cookies.
  11. Proxies.
  12. HTTP headers.
  13. Summary.
  14. References.

Introduction.

Curl is a command line utility used for transferring data to and from a server.
It supports the HTTP, FTP, IMAP, POP3, SCP, SFTP, SMTP, TFTP, TELNET and LDAP protocols.

Installation.

Curl is usually pre-installed on most Linux distributions. However, if that is not the case, you can install it with the following commands.

Installing on Debian and Ubuntu

sudo apt update
sudo apt install curl

Installing on CentOS and Fedora

sudo yum install curl

Note: curl can also be accessed through Git Bash on the Windows operating system.

Syntax.

curl [options] [URL]

The most basic curl command is as follows.

curl www.google.com > output.txt

This command fetches the HTML source of google.com and redirects the output into the output.txt file.
We can also use the -o option followed by a file name to save the output of the curl command.

curl -o output.txt webpage.com

Since no protocol is specified curl defaults to use HTTP.
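Both ways of saving output can be tried without a network by pointing curl at a local file through its file:// protocol; the paths under /tmp below are throwaway names used only for illustration.

```shell
# Create a small local file to stand in for a server response
# (curl's file:// protocol reads local files, so no network is needed).
printf 'hello from curl\n' > /tmp/curl-demo.txt

# Save the response body via shell redirection...
curl -s file:///tmp/curl-demo.txt > /tmp/out1.txt

# ...or via curl's own -o option; the two files end up identical.
curl -s -o /tmp/out2.txt file:///tmp/curl-demo.txt

cmp /tmp/out1.txt /tmp/out2.txt && echo "identical"
```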

Downloading files.

With curl we can download files from a server as follows.

curl -O https://serveraddress.com/index.html

The -O option saves the downloaded file under its original name; in this case the file will be saved as index.html.
We can use the -o option to save it with a predefined name, as described in the previous section.

We can also download multiple files as follows.

curl -O https://serveraddress1.com/index.html \
     -O https://serveraddress1.com/README.md \
     -O https://serveraddress3.com/index.js

Here we download three files, index.html, index.js and README.md and store them with their original names.

Resuming downloads.

Suppose we are downloading a large file and during the process we lose the internet connection; with curl we can use the -C - option to resume the download.
An example

Suppose we were downloading an operating system ISO file, say Ubuntu, usually around 2 GB in size.

curl -O https://fileurl.com/image.iso

Suddenly we are disconnected from the internet, but later reconnect.
We can resume the download by writing the following,

curl -C - -O https://fileurl.com/image.iso

Progress bar.

We can view how a download or upload is progressing by using the -# option, which displays a simple progress bar.

curl -# -o output.txt www.google.com

To disable the progress output we can use the --silent option.

curl --silent -o output.txt www.google.com

Redirection.

Suppose we follow an address and find that the webpage has moved; for example, if we try to access google.com in a web browser, we will be redirected to https://www.google.com/.

Curl, however, does not follow the HTTP Location header by default. When a request is sent to a moved resource, the server sends back a Location header in the response (as we shall see in the headers section); to instruct curl to follow this redirect, we use the -L option.

An example

curl google.com

Output

<HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
<TITLE>301 Moved</TITLE></HEAD><BODY>
<H1>301 Moved</H1>
The document has moved
<A HREF="http://www.google.com/">here</A>.
</BODY></HTML>

For redirection

curl -L google.com

Now, even if we request the old address, curl follows the redirection link returned in the response and retrieves the correct URL.

Limiting bandwidth.

The --limit-rate option limits the data transfer rate.
The value can be expressed in bytes, or with the suffixes k (kilobytes), m (megabytes) and g (gigabytes).

An example

curl --limit-rate 1m -o output.txt https://www.webpage.com/doc.csv

The above command limits the download speed to 1 megabyte per second.

FTP upload and download.

We can use curl to access protected files on an FTP server. The -u option is used to specify the authentication details (username and password).

curl -u username:password ftp://ftp.fileserver.com/

To download files from the FTP server we can write.

curl -u username:password ftp://ftp.fileserver.com/file

To upload files to an FTP server we write.

curl -T newfile -u username:password ftp://ftp.fileserver.com

Cookies.

We may need to make an HTTP request with specific cookies in order to access remote resources.
We can use the -b option followed by a file name containing the cookies, or a string representing the cookie.

An example

curl -L -b "cookievalue=value" -O http://webpage.com/resource

Proxies.

A proxy server is a computer that accepts incoming requests from a client and forwards them to the destination server. It works as a gateway between the end user and the internet. Proxies find applications in many areas, such as improving security, load balancing internet traffic, access control on websites and much more.
Curl supports HTTP, HTTPS, and SOCKS proxies. To transfer data through a proxy we use the -x option followed by the proxy URL.
An example

curl -x 192.168.16.4:8888 http://example.com

The above command downloads the specified resource using the proxy 192.168.16.4 on port 8888.

Proxy servers sometimes require users to be authenticated. For proxy authentication with curl, the -U (--proxy-user) option is used, followed by the authentication details as shown.

curl -U username:password -x 192.168.16.4:8888 http://example.com

HTTP Headers.

HTTP headers are core components of HTTP requests and responses; they carry information about the client browser, the server and the requested page.
We can use curl to display the response headers as we retrieve a webpage as follows,

curl -I www.google.com
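The -I option prints the response headers. Curl can also set request headers of its own with the -H option; a sketch (the URL and header values here are illustrative placeholders):

```shell
# -H adds or overrides a request header and may be repeated;
# the header values and URL below are placeholders, not real endpoints.
curl -H "User-Agent: my-script/1.0" \
     -H "Accept: application/json" \
     -I https://www.example.com/
```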

Summary.

Curl is designed to work without user interaction, and can therefore be used in shell scripts.

The curl command is also useful for testing APIs: instead of reaching for Postman, Thunder Client, or any other client, one can test API endpoints straight from a terminal, which is more efficient since it does not use up a lot of a computer's resources.
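As a minimal sketch of such a scripted check (the URL is a placeholder; -w '%{http_code}' makes curl print only the response status code, while -o /dev/null discards the body):

```shell
#!/bin/sh
# Placeholder endpoint; replace with the API under test.
url="http://example.com/"

# -s: no progress output, -o /dev/null: discard the body,
# -w '%{http_code}': print only the HTTP status code.
status=$(curl -s -o /dev/null -w '%{http_code}' "$url")

if [ "$status" = "200" ]; then
    echo "OK: $url returned $status"
else
    echo "FAIL: $url returned $status"
fi
```

Because the script's output and exit status are machine-checkable, a check like this can run unattended, for example from cron.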

References.

  1. curl --manual (available in Git Bash on Windows).
  2. man curl (Linux).