Code Bug Fix: Convert command line cURL to PHP cURL

I’ve never done any curl before so am in need of some help. I’ve tried to work this out from examples but cannot get my head around it!

I have a curl command that I can successfully run from a Linux (Ubuntu) command line that puts a file to a wiki through an API.

I need to incorporate this curl command into a PHP script I'm building.

How can I translate this curl command so that it works in a PHP script?

curl -b cookie.txt -X PUT \
     --data-binary "@test.png" \
     -H "Content-Type: image/png" \
     "http://hostname/@api/deki/pages/=TestPage/files/=test.png" \
     -0

cookie.txt contains the authentication but I don’t have a problem putting this in clear text in the script as this will be run by me only.

@test.png must be a variable such as $filename

http://hostname/@api/deki/pages/=TestPage/files/= must be a variable such as $pageurl

Thanks for any help.

a starting point:

<?php

$pageurl = "http://hostname/@api/deki/pages/=TestPage/files/=";
$filename = "test.png";

$theurl = $pageurl . $filename;

$ch = curl_init($theurl);
curl_setopt($ch, CURLOPT_COOKIE, ...); // -b
curl_setopt($ch, CURLOPT_CUSTOMREQUEST, 'PUT'); // -X
curl_setopt($ch, CURLOPT_BINARYTRANSFER, TRUE); // --data-binary
curl_setopt($ch, CURLOPT_HTTPHEADER, ['Content-Type: image/png']); // -H
curl_setopt($ch, CURLOPT_HTTP_VERSION, CURL_HTTP_VERSION_1_0); // -0

...
?>

See also: http://www.php.net/manual/en/function.curl-setopt.php
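A closer mapping of --data-binary on a PUT request is CURLOPT_UPLOAD plus CURLOPT_INFILE. Here is a minimal sketch filling in the blanks above (the URL, cookie file, and file name are the question's placeholders, not tested values):

<?php

$pageurl  = "http://hostname/@api/deki/pages/=TestPage/files/=";
$filename = "test.png";

$fh = fopen($filename, 'rb');                                       // file to upload

$ch = curl_init($pageurl . $filename);
curl_setopt($ch, CURLOPT_COOKIEFILE, 'cookie.txt');                 // -b cookie.txt
curl_setopt($ch, CURLOPT_UPLOAD, true);                             // PUT with a request body
curl_setopt($ch, CURLOPT_INFILE, $fh);                              // --data-binary "@test.png"
curl_setopt($ch, CURLOPT_INFILESIZE, filesize($filename));          // size is required for uploads
curl_setopt($ch, CURLOPT_HTTPHEADER, ['Content-Type: image/png']);  // -H
curl_setopt($ch, CURLOPT_HTTP_VERSION, CURL_HTTP_VERSION_1_0);      // -0
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);

$response = curl_exec($ch);
if ($response === false) {
    echo 'Error: ' . curl_error($ch);
}
curl_close($ch);
fclose($fh);
?>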

You need curl-to-PHP: https://incarnate.github.io/curl-to-php/ ("Instantly convert curl commands to PHP code").

Whichever cURL command you have on the command line, you can convert it to PHP with this tool:

https://incarnate.github.io/curl-to-php/

It helped me after long hours of searching for a solution! Hope it helps you too! Your solution is this:

// Generated by curl-to-PHP: http://incarnate.github.io/curl-to-php/
$ch = curl_init();

curl_setopt($ch, CURLOPT_URL, "http://hostname/@api/deki/pages/=TestPage/files/=test.png");
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
$post = array(
    "file" => "@" .realpath("test.png")
);
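// note: the "@" file-upload syntax above is deprecated since PHP 5.5 and removed
// in PHP 7.0 (use CURLFile instead); also, passing an array here sends
// multipart/form-data rather than the raw --data-binary body of the original command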
curl_setopt($ch, CURLOPT_POSTFIELDS, $post);
curl_setopt($ch, CURLOPT_CUSTOMREQUEST, "PUT");


$headers = array();
$headers[] = "Content-Type: image/png";
curl_setopt($ch, CURLOPT_HTTPHEADER, $headers);

$result = curl_exec($ch);
if (curl_errno($ch)) {
    echo 'Error:' . curl_error($ch);
}
curl_close($ch);

Try this:

$cmd = 'curl -b cookie.txt -X PUT \
     --data-binary "@test.png" \
     -H "Content-Type: image/png" \
     "http://hostname/@api/deki/pages/=TestPage/files/=test.png" \
     -0';
exec($cmd, $result);

Unfortunately SO still doesn't have CommonMark table markup. This is an autogenerated list of which curl command-line options might map onto which PHP CURLOPT_ constants:

Note that this only lists somewhat exact matches of --long options to similarly named CURLOPT_ constants, but it should give you enough hints on how to compare the curl --help output with the PHP curl_setopt() list.

The --libcurl option was added for this purpose. Even though it produces a C program, I think it should be fairly easy to translate to PHP.
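For example, with the command from the question (transfer.c is an arbitrary output name):

curl -b cookie.txt -X PUT --data-binary "@test.png" \
     -H "Content-Type: image/png" \
     "http://hostname/@api/deki/pages/=TestPage/files/=test.png" \
     -0 --libcurl transfer.c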

Using MYYN’s answer as a starting point, and this page as a reference on how to send POST data using PHP cURL, here is my suggestion (I am working on something very similar at the moment):

<?php

$pageurl = "http://hostname/@api/deki/pages/=TestPage/files/=";
$filename = "test.png";

$theurl = $pageurl.$filename;

$ch = curl_init($theurl);
curl_setopt($ch, CURLOPT_COOKIE, ...); // -b
curl_setopt($ch, CURLOPT_CUSTOMREQUEST, 'PUT'); // -X
curl_setopt($ch, CURLOPT_BINARYTRANSFER, TRUE); // --data-binary
curl_setopt($ch, CURLOPT_HTTPHEADER, ['Content-Type: image/png']); // -H
curl_setopt($ch, CURLOPT_HTTP_VERSION, CURL_HTTP_VERSION_1_0); // -0

$post = array("$filename"=>"@$filename");
curl_setopt($ch, CURLOPT_POSTFIELDS, $post); 
$response = curl_exec($ch);
?>

You can probably collapse the many curl_setopt() calls into a single curl_setopt_array() call if you desire, as in the sketch below.
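For instance (the ... cookie placeholder is kept from the original answer):

curl_setopt_array($ch, [
    CURLOPT_COOKIE         => '...',                       // -b
    CURLOPT_CUSTOMREQUEST  => 'PUT',                       // -X
    CURLOPT_BINARYTRANSFER => true,                        // --data-binary
    CURLOPT_HTTPHEADER     => ['Content-Type: image/png'], // -H
    CURLOPT_HTTP_VERSION   => CURL_HTTP_VERSION_1_0,       // -0
]);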

Better this, in one line:

$cmd = 'curl -b cookie.txt -X PUT --data-binary "@test.png" -H "Content-Type: image/png" "http://hostname/@api/deki/pages/=TestPage/files/=test.png" -0';
exec($cmd, $result);

Server Bug Fix: How to mention the prefix directory path in CURL command

I am new to Linux. I want to write a Perl script that downloads files from FTP sites, using the curl command for the downloads. This works fine with wget but not with curl.

The command below downloads the file from the FTP server. The username/password are given in the wgetrc_proxy file, and the directory on my Linux box where DATA.ZIP should be saved (/hom1/sara/) is given with --directory-prefix.

WGETRC=/hom1/sara/wgetrc_proxy wget --directory-prefix=/hom1/sara/ ftp://67.125.134.122/out_files/DATA.ZIP

I tried the same scenario using curl, but it is not working.

WGETRC=/hom1/sara/wgetrc_proxy curl --directory-prefix=/hom1/sara/ ftp://67.125.134.122/out_files/DATA.ZIP

wgetrc_proxy contains the following:

-sh-3.00$ cat wgetrc_proxy
netrc = off

login=aaaa
passwd=xxxx

dot_style=mega
timeout=180

What mistake have I made here, or have I missed some environment configuration? Please help me resolve this issue.

curl doesn’t support a wgetrc file or the same command line options as wget. Use man curl to get the complete list of available options.

This should do:

cd /hom1/sara/ && curl --max-time 180 --proxy aaaa:[email protected]$http_proxy ftp://67.125.134.122/out_files/DATA.ZIP
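If changing directories is undesirable: curl has no --directory-prefix option, but --output accepts the full destination path instead (a sketch; aaaa/xxxx stand for the credentials from wgetrc_proxy):

curl --max-time 180 --user aaaa:xxxx \
     --output /hom1/sara/DATA.ZIP \
     ftp://67.125.134.122/out_files/DATA.ZIP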

Server Bug Fix: Check SSL certificate used by an apache virtual host locally

I just updated a certificate for a particular Apache virtual host which is behind a load balancer, then restarted it with the command httpd -k restart -f /etc/httpd/someweb.tx.com/conf/httpd.conf.

What's the best way for me to check whether the newly updated certificate is the one in use on the server? I tried the openssl s_client utility as below, but it doesn't seem to check the cert locally; instead it pulls the old certificate from my production site over DNS.

openssl s_client -showcerts -servername example.com -connect someweb.tx.com:443

Please note that I have a few more virtual hosts running on 443 on this server whose certificates were unchanged, hence the need to check the certificate update on that one particular virtual host.

Also, I'm ready to try any tool/utility, not just openssl s_client, as long as my requirement can be addressed. Please advise.

Thanks in advance
-B

With openssl s_client, you set the SNI name with -servername. With -connect you specify the IP address (or hostname) and port. So this should do the trick:

openssl s_client -showcerts -servername example.com -connect localhost:443
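To see at a glance whether the served certificate is the new one, the subject and validity dates can be extracted from the handshake (run this on the server itself so localhost reaches the right listener):

echo | openssl s_client -servername example.com -connect localhost:443 2>/dev/null | openssl x509 -noout -subject -dates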

Making Game: Where is the setting that tells which .ssh\known_hosts for curl to use? (Windows)

I am using curl for SFTP transfers, but as of Windows 10 version 1803 (and earlier, with Insider build 17063) it is shipped as a built-in command. So when I run my custom installed version, it no longer reads my current known_hosts file.

I am trying to upload a file to a secure location:

CD C:\Program Files\cURL\bin
curl.exe --verbose  --upload-file "c:\users\test\test.txt" sftp://safe.ftp-server.org/incoming/account
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0*   Trying 000.000.000.000:22...
* Connected to elink-sftp-u.bankofamerica.com (000.000.000.000) port 22 (#0)
* Failed to read known hosts from C:\Users\test\AppData\Roaming/.ssh/known_hosts
* Did not find host safe.ftp-server.org in C:\Users\test\AppData\Roaming/.ssh/known_hosts
* SSH MD5 fingerprint: [***SECRET-GUID***]
* SSH host check: 2, key: <none>
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
* Closing connection 0
curl: (60) SSL peer certificate or SSH remote key was not OK
More details here: https://curl.haxx.se/docs/sslcerts.html

curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.

NOTE: I have masked sensitive data.
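One knob that may be worth trying, an assumption based on how the curl tool derives that path (it checks CURL_HOME, then HOME, then APPDATA, and appends /.ssh/known_hosts): point HOME at the directory that actually holds your known_hosts file before invoking curl.

REM hypothetical: make curl look in C:\somewhere\.ssh\known_hosts
SET HOME=C:\somewhere
curl.exe --verbose --upload-file "c:\users\test\test.txt" sftp://safe.ftp-server.org/incoming/account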

Server Bug Fix: curl: (25) Failed FTP upload: 553 to vsftpd docker

I'm running your container and trying to send files using curl, but it fails.

Running the container

export FTP_USER="test"
export FTP_PASSWORD="test"

docker run \
    --name mock_ftp_server \
    --publish 21:21 \
    --publish 4559-4564:4559-4564 \
    --env FTP_USER="$FTP_USER" \
    --env FTP_PASSWORD="$FTP_PASSWORD" \
    --detach \
  panubo/vsftpd

Sending file

$ curl --upload-file /tmp/mock.data-2017-03-28.tar.gz ftp://localhost --user $FTP_USER:$FTP_PASSWORD
% Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                               Dload  Upload   Total   Spent    Left  Speed
0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
curl: (25) Failed FTP upload: 553

Question

What’s the matter here? Do I need to add something?

Based on "VSFTPD 553 error: could not create file" on AskUbuntu, I fixed it by changing the owner of the root directory (/srv/) to the FTP user ftp:

docker run …
docker exec mock_ftp_server chown ftp:ftp -R /srv/
curl …

I'm still waiting for information about the security implications of this solution.

Server Bug Fix: Curl: disable certificate verification

I am developing and I need to access https://localhost. I know the certificate will not match. I just want curl to ignore that. Currently it gives me the following error message:

curl: (51) SSL peer certificate or SSH remote key was not OK

Is it possible to tell curl to perform the access anyway?

Yeah, you can do that. From curl --help or man curl:

-k, --insecure

(SSL) This option explicitly allows curl to perform “insecure” SSL connections and transfers. All SSL connections are attempted to be made secure by using the CA certificate bundle installed by default. This makes all connections considered “insecure” fail unless -k, --insecure is used.

See this online resource for further details:
http://curl.haxx.se/docs/sslcerts.html
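In your case that would be, for example:

curl -k https://localhost/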

curl -k or curl --insecure does NOT fix this particular error condition:
“curl: (51) SSL peer certificate or SSH remote key was not OK”
Math Genius: How to simplify the curl of a second-order tensor times a vector

I have the following equation:
$$ \nabla \times (\sigma E) $$
where $\sigma$ is a second-order tensor and $E$ is a vector.

I do have another equation I can substitute for $\nabla \times E$.

How do I simplify this to terms with $\nabla \times E$, so without $\sigma$ inside the curl?
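For reference, expanding the curl in Cartesian index notation (summation convention, Levi-Civita symbol $\varepsilon_{ijk}$) shows why $\sigma$ cannot simply be pulled out:
$$ [\nabla \times (\sigma E)]_i = \varepsilon_{ijk}\,\partial_j(\sigma_{kl} E_l) = \varepsilon_{ijk}\,(\partial_j \sigma_{kl})\,E_l + \varepsilon_{ijk}\,\sigma_{kl}\,\partial_j E_l $$
The second term reduces to $\sigma(\nabla \times E)$ only when $\sigma$ is a constant scalar multiple of the identity; in general the gradient of $\sigma$ remains.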

Server Bug Fix: HTTPS connection to specific sites fail with cURL on macOS

On my Mac, HTTPS connections to certain sites fail using the built-in curl binary of macOS 10.14. They work fine with different browsers, as well as with other builds of cURL on the same system. One of the affected sites is https://kapeli.com/, the download site for the utility “Dash”. Another one is https://electroncash.org.

cURL complains about an expired certificate:

curl: (60) SSL certificate problem: certificate has expired
More details here: https://curl.haxx.se/docs/sslcerts.html

I encountered this during installations with Homebrew-Cask, which uses macOS’ built-in cURL to download software.

Root CA certificates used by the mentioned sites (Comodo and USERTrust) have expired this morning (UTC time). While I find it remarkable that two different Root CA certs would expire at the exact same second, this may be explained by USERTrust being affiliated with Comodo (now Sectigo).


Edit: These two actually never were Root CA certificates, but rather intermediate CAs signed by “AddTrust External CA Root”. Therefore, their expiration date was determined by the validity of the “AddTrust External CA Root” certificate, which also happens to have expired at the exact same second.


Now, updated certificates (sharing their private keys with the expired ones) were issued back in 2010 (Comodo, USERTrust). These certificates are part of the common Root CA stores these days (including Apple's system trust store), so browsers establish the connections perfectly fine. The same is true for most variants of cURL (e.g. from MacPorts or Homebrew), which are built against custom OpenSSL installations.

The built-in cURL variant of macOS 10.14 is built against LibreSSL and uses /etc/ssl/cert.pem as its Root CA store, which also includes the new certificates. However, something appears to be causing cURL or LibreSSL to prefer the old certificates for the validity check. I suppose cURL is at least somewhat involved in the problem, since I couldn't get the connections to fail using /usr/bin/openssl s_client (/usr/bin/openssl is actually built from LibreSSL).

My hypothesis would be that the problem is caused by the sites sending the expired Root CA certificate as part of their certificate chain. Including the Root CA in such chains is allowed, but not required, and in this case appears to break certificate validation.


Edit: This is part of a series of issues around the “AddTrust External CA Root” expiration. See this blog post by Andrew Ayer or this Twitter thread by Ryan Sleevi for the bigger picture. Ryan Sleevi also has a collection of things failing due to the expiration.

On macOS 10.15, where cURL uses OpenSSL 0.9.8 by default, the issue apparently may be mitigated by setting the environment variable CURL_SSL_BACKEND=secure-transport. This does not work on 10.14 with its LibreSSL which, according to Christian Heimes, is affected by the issue in general.
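For example, to apply that mitigation for a single invocation:

CURL_SSL_BACKEND=secure-transport curl -I https://kapeli.com/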

Just comment out the AddTrust entry in /etc/ssl/cert.pem; since the end certificates are cross-signed, they will be validated against USERTrust.

In theory there should be no need to comment out that entry, but in practice the LibreSSL version that ships with macOS (2.8.3 on Catalina) has broken certificate path validation, because it's based on an older version of OpenSSL (< 1.1.1) that contains the same bug.

According to the LibreSSL documentation (https://www.libressl.org/releases.html), they started incorporating OpenSSL 1.1.1 functionality in their 3.x.x series. I could probably find a way to update it manually, but I am lazy and will wait for Apple to fix it.

All the affected sites that I have found seem to have the same expired CA cert in their chain:

openssl s_client -connect kapeli.com:443
CONNECTED(00000003)
depth=3 C = SE, O = AddTrust AB, OU = AddTrust External TTP Network, CN = AddTrust External CA Root
verify error:num=10:certificate has expired
notAfter=May 30 10:48:38 2020 GMT

I see issues popping up on many different sites about this right now. Editing the CA cert file like @jmibanez suggests will probably work when the site isn't sending the expired certificate in the response. I tried the latest CA cert file from https://curl.haxx.se/ca/cacert.pem using curl --cacert path/to/cacert.pem, which didn't work. Browsers seem fine, so they apparently ignore the expired CA cert included in responses from web sites.

EDIT: My bad here. I was using curl 7.54 by mistake. Newer versions are working. The error does not exist when using curl 7.67/7.70.

At least a workaround for macOS 10.15.4:

I encountered the same issue today in conjunction with the codecov bash script. My quick fix: brew install curl and do what brew link curl suggests. You can check that you've picked the right curl with which curl (it should point to /usr/local/opt/curl/bin/curl).

I have no time or patience to wait for 🍏 to fix those things.

I had to fix this issue on a Debian-based server. Here is how it went (a sketch of the commands follows the list):

  1. Remove AddTrust_External_Root.crt from your system (usually found in /etc/ssl/certs)
  2. Remove or comment out the “mozilla/AddTrust_External_Root” line in /etc/ca-certificates.conf
  3. Run sudo update-ca-certificates to update the certificates used by OpenSSL

Hope it can help you.
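A minimal sketch of those steps, assuming the stock Debian layout where the entry is named mozilla/AddTrust_External_Root.crt and a "!" prefix in /etc/ca-certificates.conf deselects a certificate:

# disable the expired root by prefixing its entry with "!"
sudo sed -i 's|^mozilla/AddTrust_External_Root.crt|!mozilla/AddTrust_External_Root.crt|' /etc/ca-certificates.conf

# rebuild /etc/ssl/certs and the bundle used by OpenSSL
sudo update-ca-certificates --fresh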

Linux HowTo: What is “curl: (56) SSL read: error:00000000:lib(0):func(0):reason(0), errno 73” telling me?

What does this cURL error mean and where can I find more related information?

curl: (56) SSL read: error:00000000:lib(0):func(0):reason(0), errno 73

I’m writing a shell script to query the Splunk API. In some cases, after 5 minutes, I get this error. Sometimes I can rerun the script and the error goes away and I get my desired output.

Here is what I see on my terminal.

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:--  0:05:01 --:--:--     0
curl: (56) SSL read: error:00000000:lib(0):func(0):reason(0), errno 73

A sample search is this. I'll substitute “spock” for a hostname. This search does succeed on other hosts, so this specific search isn't somehow incorrect and causing the problem.

search index=os_nix host=spock source=/var/adm/messages latest=-30d NOT snmpd authentication (error OR fail OR failure) | head

I am building my command like this.

URLPROTO='https://'
URLHOST='splunkapi.example.com'
URLPORT=':8089'
URLDIR='/servicesNS/admin/search/search/jobs/export'
URL="${URLPROTO}${URLHOST}${URLPORT}${URLDIR}"
luser=(read from user input)
lpasswd=(read from user input)
OUTFILE=(generated from hostname and the type of search I'm running) 
mySEARCH=(read from input file)
USER=(User ID read from environment)

The actual command is this.

curl -k -o "${OUTFILE}" -u ${luser:=${USER}}:${lpasswd} ${URL} -d search="${mySEARCH}" -d output_mode="csv"

My script loops through a list of hosts, read from an external file, performing several searches, read from a different external file, against each host. To clarify further, I can run my script and I get two or three failures out of a total of thirty total searches. The entire run doesn’t fail, just two or three individual host/search pairs. The failure isn’t limited to a specific host/search pair or a specific host or a specific search, as I can rerun my script and the failures stand a good chance of succeeding.

What is the cURL error trying to tell me?

If it matters, this is AIX.

curl 7.11.1 (powerpc-ibm-aix5.2.0.0) libcurl/7.11.1 OpenSSL/0.9.7g ipv6
Protocols: ftp gopher telnet dict ldap http file https ftps
Features: IPv6 SSL NTLM Largefile

The names associated with the errno “error numbers” should be in the file errno.h, usually stashed somewhere under /usr/include, though a webby search turns up:

http://www.ioplex.com/~miallen/errcmp.html

Which for 73 and AIX is “Connection reset by peer”. So for some reason the peer (or something between the client and the peer) reset the connection.
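If Perl is handy, errno numbers can also be translated on the machine itself: assigning a number to $! and printing it in string context yields the system's own message (a sketch; on an AIX box this should print the “Connection reset by peer” text):

perl -e '$! = 73; print $!, "\n"'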

Curl reports these errors because Splunk doesn’t return any data. (My search returned data for most hosts but a time-specific window would cause it to occasionally fail.)

Splunk times out after 5 minutes and disconnects curl, hence errno 73. The curl exit code (56) means “Failure in receiving network data.” Splunk doesn't send anything, so there is no data to receive.

Thank you both @thrig and @OscarAkaElvis for assisting.
