Saturday, 9 April 2016

OMSA - replacing a disk

root@hostname.example:/root# megacli -PDList -aALL

Enclosure Device ID: 32
Slot Number: 23
Device Id: 23
Sequence Number: 2
Media Error Count: 0
Other Error Count: 0
Predictive Failure Count: 5
Last Predictive Failure Event Seq Number: 20030
PD Type: SAS
Raw Size: 1.090 TB [0x8bba0cb0 Sectors]
Non Coerced Size: 1.090 TB [0x8baa0cb0 Sectors]
Coerced Size: 1.090 TB [0x8ba80000 Sectors]
Firmware state: Online
SAS Address(0): 0x5000c5006c2a5101
SAS Address(1): 0x0
Connected Port Number: 0(path0)
Inquiry Data: SEAGATE ST1200MM0007    IS04S3L04V8L           
FDE Capable: Not Capable
FDE Enable: Disable
Secured: Unsecured
Locked: Unlocked
Foreign State: None
Device Speed: 6.0Gb/s
Link Speed: 6.0Gb/s
Media Type: Hard Disk Device


root@hostname.example:/root# /opt/dell/srvadmin/bin/omreport storage pdisk controller=0

root@hostname.example:/root# /opt/dell/srvadmin/bin/omreport storage pdisk vdisk=22 controller=0
List of Physical Disks belonging to disc22

Controller PERC H710P Mini (Embedded)
ID                              : 0:1:23
Status                          : Non-Critical
Name                            : Physical Disk 0:1:23
State                           : Online
Power Status                    : Spun Up
Bus Protocol                    : SAS
Media                           : HDD
Part of Cache Pool              : Not Applicable
Remaining Rated Write Endurance : Not Applicable
Failure Predicted               : Yes
Revision                        : IS04
Driver Version                  : Not Applicable
Model Number                    : Not Applicable
T10 PI Capable                  : No
Certified                       : Yes
Encryption Capable              : No
Encrypted                       : Not Applicable
Progress                        : Not Applicable
Mirror Set ID                   : Not Applicable
Capacity                        : 1,117.25 GB (1199638052864 bytes)
Used RAID Disk Space            : 1,117.25 GB (1199638052864 bytes)
Available RAID Disk Space       : 0.00 GB (0 bytes)
Hot Spare                       : No
Vendor ID                       : DELL(tm)
Product ID                      : ST1200MM0007
Serial No.                      : S3L04V8L
Part Number                     : CN0RMCP3726223AV00TBA00
Negotiated Speed                : 6.00 Gbps
Capable Speed                   : 6.00 Gbps
PCIe Negotiated Link Width      : Not Applicable
PCIe Maximum Link Width         : Not Applicable
Sector Size                     : 512B
Device Write Cache              : Not Applicable
Manufacture Day                 : 06
Manufacture Week                : 44
Manufacture Year                : 2013
SAS Address                     : 5000C5006C2A5101
Non-RAID Disk Cache Policy      : Not Applicable
Disk Cache Policy               : Not Applicable
Form Factor                     : Not Available
Sub Vendor                      : Not Available


root@hostname.example:/root# /opt/dell/srvadmin/bin/omreport storage pdisk vdisk=0 controller=0
List of Physical Disks belonging to Virtual Disk 0

Controller PERC H710P Mini (Embedded)
ID                              : 0:1:0
Status                          : Ok
Name                            : Physical Disk 0:1:0
State                           : Online
Power Status                    : Spun Up
Bus Protocol                    : SAS
Media                           : HDD
Part of Cache Pool              : Not Applicable
Remaining Rated Write Endurance : Not Applicable
Failure Predicted               : No
Revision                        : FS64
Driver Version                  : Not Applicable
Model Number                    : Not Applicable
T10 PI Capable                  : No
Certified                       : Yes
Encryption Capable              : No
Encrypted                       : Not Applicable
Progress                        : Not Applicable
Mirror Set ID                   : Not Applicable
Capacity                        : 278.88 GB (299439751168 bytes)
Used RAID Disk Space            : 278.88 GB (299439751168 bytes)
Available RAID Disk Space       : 0.00 GB (0 bytes)
Hot Spare                       : No
Vendor ID                       : DELL(tm)
Product ID                      : ST9300603SS
Serial No.                      : 6SE4AZS3
Part Number                     : CN0T871K7262216L0777A01
Negotiated Speed                : 6.00 Gbps
Capable Speed                   : 6.00 Gbps
PCIe Negotiated Link Width      : Not Applicable
PCIe Maximum Link Width         : Not Applicable
Sector Size                     : 512B
Device Write Cache              : Not Applicable
Manufacture Day                 : 05
Manufacture Week                : 25
Manufacture Year                : 2011
SAS Address                     : 5000C5003B728681
Non-RAID Disk Cache Policy      : Not Applicable
Disk Cache Policy               : Not Applicable
Form Factor                     : Not Available
Sub Vendor                      : Not Available

ID                              : 0:1:1
Status                          : Ok
Name                            : Physical Disk 0:1:1
State                           : Online
Power Status                    : Spun Up
Bus Protocol                    : SAS
Media                           : HDD
Part of Cache Pool              : Not Applicable
Remaining Rated Write Endurance : Not Applicable
Failure Predicted               : No
Revision                        : FS64
Driver Version                  : Not Applicable
Model Number                    : Not Applicable
T10 PI Capable                  : No
Certified                       : Yes
Encryption Capable              : No
Encrypted                       : Not Applicable
Progress                        : Not Applicable
Mirror Set ID                   : Not Applicable
Capacity                        : 278.88 GB (299439751168 bytes)
Used RAID Disk Space            : 278.88 GB (299439751168 bytes)
Available RAID Disk Space       : 0.00 GB (0 bytes)
Hot Spare                       : No
Vendor ID                       : DELL(tm)
Product ID                      : ST9300603SS
Serial No.                      : 6SE4DZYJ
Part Number                     : CN0T871K7262216L0274A01
Negotiated Speed                : 6.00 Gbps
Capable Speed                   : 6.00 Gbps
PCIe Negotiated Link Width      : Not Applicable
PCIe Maximum Link Width         : Not Applicable
Sector Size                     : 512B
Device Write Cache              : Not Applicable
Manufacture Day                 : 05
Manufacture Week                : 25
Manufacture Year                : 2011
SAS Address                     : 5000C5003B7100B9
Non-RAID Disk Cache Policy      : Not Applicable
Disk Cache Policy               : Not Applicable
Form Factor                     : Not Available
Sub Vendor                      : Not Available
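The listings above identify the suspect drive: slot 23, with Predictive Failure Count: 5 in megacli and Failure Predicted: Yes in omreport. What follows is only a hedged sketch of the usual replacement steps with the same two tools; the enclosure:slot pair [32:23], controller 0 and pdisk ID 0:1:23 are copied from the output above and must be re-checked on your own box before running anything.

```
# Blink the drive LED so the right disk gets pulled
# (enclosure 32, slot 23, adapter 0 - from the megacli output above)
megacli -PdLocate -start -PhysDrv '[32:23]' -a0

# OMSA equivalent, using the 0:1:23 ID from omreport
/opt/dell/srvadmin/bin/omconfig storage pdisk action=blink controller=0 pdisk=0:1:23

# Mark the drive offline before physically swapping it
megacli -PDOffline -PhysDrv '[32:23]' -a0

# After inserting the new disk, watch the rebuild progress
megacli -PDRbld -ShowProg -PhysDrv '[32:23]' -a0
```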

Tuesday, 28 July 2015

MongoDB replication with arbiter

One of my projects required database replication.
I had been using standard master-slave replication until the last failure.
My master (PRIMARY) went down and the slave (SECONDARY) became the master (PRIMARY).
When the "old" master came back up, it took the master role over from the "new" one.
That was unacceptable for me, because I had everything running on the new master and it contained new data.
The solution was simple: use an arbiter.
It costs one more machine running mongo, but with no data on board.

How to set up

1) Create a configuration file on the first node (master/PRIMARY) with the replica set name:
root@mongo1:~# cat /etc/mongodb1.conf
dbpath=/var/lib/mongodb1
logpath=/var/log/mongodb/mongodb1.log
logappend=true
bind_ip = mongo1
port = 17017
journal=true
replSet = replication_test

In my case the configuration is almost the same on the other hosts (except bind_ip and port).
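As a sketch, the second node's file would then look like this; the paths, port and replSet name are reconstructed from the mongod options printed in the step 4 log, while `bind_ip = mongo2` is an assumption (the step 4 log shows the author's instance actually bound to `mongo`):

```
root@mongo2:~# cat /etc/mongodb2.conf
dbpath=/var/lib/mongodb2
logpath=/var/log/mongodb/mongodb2.log
logappend=true
bind_ip = mongo2
port = 27017
journal=true
replSet = replication_test
```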

2) Start mongod on the first node.

root@mongo1:~# mongod -f /etc/mongodb1.conf &

3) Log into the mongo shell and initialize the replica set.

root@mongo1:~# mongo --port 17017
MongoDB shell version: 2.0.6
connecting to: 127.0.0.1:17017/test
> rs.config()
null
> rs.status()
{ "startupStatus" : 3, "info" : "run rs.initiate(...) if not yet done for the set", "errmsg" : "can't get local.system.replset config from self or any seed (EMPTYCONFIG)", "ok" : 0 }
> rs.initiate();
{ "info2" : "no configuration explicitly specified -- making one", "me" : "mongo1:17017", "info" : "Config now saved locally. Should come online in about a minute.", "ok" : 1 }
> rs.status()
{ "set" : "replication_test", "date" : ISODate("2015-07-27T14:30:34Z"), "myState" : 2, "members" : [ { "_id" : 0, "name" : "mongo1:17017", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "optime" : { "t" : 1438007418000, "i" : 1 }, "optimeDate" : ISODate("2015-07-27T14:30:18Z"), "self" : true } ], "ok" : 1 }
SECONDARY> rs.status()
{ "set" : "replication_test", "date" : ISODate("2015-07-27T14:31:01Z"), "myState" : 1, "members" : [ { "_id" : 0, "name" : "mongo1:17017", "health" : 1, "state" : 1, "stateStr" : "PRIMARY", "optime" : { "t" : 1438007418000, "i" : 1 }, "optimeDate" : ISODate("2015-07-27T14:30:18Z"), "self" : true } ], "ok" : 1 }
PRIMARY> rs.config()
{ "_id" : "replication_test", "version" : 1, "members" : [ { "_id" : 0, "host" : "mongo1:17017" } ] }

4) Log into the second node and start mongod:

root@mongo2:~# mongod -f /etc/mongodb2.conf & cat /var/log/mongodb/mongodb2.log
Mon Jul 27 14:32:55 [initandlisten] MongoDB starting : pid=2650 port=27017 dbpath=/var/lib/mongodb2 64-bit host=mongo
Mon Jul 27 14:32:55 [initandlisten]
Mon Jul 27 14:32:55 [initandlisten] ** WARNING: You are running on a NUMA machine.
Mon Jul 27 14:32:55 [initandlisten] ** We suggest launching mongod like this to avoid performance problems:
Mon Jul 27 14:32:55 [initandlisten] **     numactl --interleave=all mongod [other options]
Mon Jul 27 14:32:55 [initandlisten]
Mon Jul 27 14:32:55 [initandlisten] db version v2.0.6, pdfile version 4.5
Mon Jul 27 14:32:55 [initandlisten] git version: nogitversion
Mon Jul 27 14:32:55 [initandlisten] build info: Linux z6 3.8-trunk-amd64 #1 SMP Debian 3.8.3-1~experimental.1 x86_64 BOOST_LIB_VERSION=1_49
Mon Jul 27 14:32:55 [initandlisten] options: { bind_ip: "mongo", config: "/etc/mongodb2.conf", dbpath: "/var/lib/mongodb2", journal: "true", logappend: "true", logpath: "/var/log/mongodb/mongodb2.log", port: 27017, replSet: "replication_test" }
Mon Jul 27 14:32:55 [initandlisten] journal dir=/var/lib/mongodb2/journal
Mon Jul 27 14:32:55 [initandlisten] recover : no journal files present, no recovery needed
Mon Jul 27 14:32:55 [initandlisten] waiting for connections on port 27017
Mon Jul 27 14:32:55 [websvr] admin web console waiting for connections on port 28017
Mon Jul 27 14:32:55 [initandlisten] connection accepted from 10.0.0.3:42241 #1
Mon Jul 27 14:32:55 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
Mon Jul 27 14:32:55 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done

5) On the first node (PRIMARY), add the secondary node:

PRIMARY> rs.add("mongo2:27017")
{ "ok" : 1 }
PRIMARY> rs.status()
{ "set" : "replication_test", "date" : ISODate("2015-07-27T14:35:31Z"), "myState" : 1, "members" : [ { "_id" : 0, "name" : "mongo1:17017", "health" : 1, "state" : 1, "stateStr" : "PRIMARY", "optime" : { "t" : 1438007724000, "i" : 1 }, "optimeDate" : ISODate("2015-07-27T14:35:24Z"), "self" : true }, { "_id" : 1, "name" : "mongo2:27017", "health" : 1, "state" : 3, "stateStr" : "RECOVERING", "uptime" : 7, "optime" : { "t" : 0, "i" : 0 }, "optimeDate" : ISODate("1970-01-01T00:00:00Z"), "lastHeartbeat" : ISODate("2015-07-27T14:35:30Z"), "pingMs" : 21545440, "errmsg" : "initial sync need a member to be primary or secondary to do our initial sync" } ], "ok" : 1 }
PRIMARY> rs.status()
{ "set" : "replication_test", "date" : ISODate("2015-07-27T14:36:32Z"), "myState" : 1, "members" : [ { "_id" : 0, "name" : "mongo1:17017", "health" : 1, "state" : 1, "stateStr" : "PRIMARY", "optime" : { "t" : 1438007724000, "i" : 1 }, "optimeDate" : ISODate("2015-07-27T14:35:24Z"), "self" : true }, { "_id" : 1, "name" : "mongo2:27017", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 68, "optime" : { "t" : 1438007724000, "i" : 1 }, "optimeDate" : ISODate("2015-07-27T14:35:24Z"), "lastHeartbeat" : ISODate("2015-07-27T14:36:30Z"), "pingMs" : 26670 } ], "ok" : 1 }

6) Run the arbiter on the third node and add it to the existing infrastructure from the PRIMARY node:

root@mongo3:~# mongod -f /etc/mongodb3.conf &
root@mongo1:~# mongo --port 17017
PRIMARY> rs.addArb("mongo3:37017");
{ "ok" : 1 }
PRIMARY> rs.status();
{ "set" : "replication_test", "date" : ISODate("2015-07-27T14:39:09Z"), "myState" : 1, "members" : [ { "_id" : 0, "name" : "mongo1:17017", "health" : 1, "state" : 1, "stateStr" : "PRIMARY", "optime" : { "t" : 1438007937000, "i" : 1 }, "optimeDate" : ISODate("2015-07-27T14:38:57Z"), "self" : true }, { "_id" : 1, "name" : "mongo2:27017", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 225, "optime" : { "t" : 1438007937000, "i" : 1 }, "optimeDate" : ISODate("2015-07-27T14:38:57Z"), "lastHeartbeat" : ISODate("2015-07-27T14:39:08Z"), "pingMs" : 0 }, { "_id" : 2, "name" : "mongo3:37017", "health" : 1, "state" : 5, "stateStr" : "STARTUP2", "uptime" : 12, "optime" : { "t" : 0, "i" : 0 }, "optimeDate" : ISODate("1970-01-01T00:00:00Z"), "lastHeartbeat" : ISODate("2015-07-27T14:39:07Z"), "pingMs" : 0 } ], "ok" : 1 }
PRIMARY> rs.status();
{ "set" : "replication_test", "date" : ISODate("2015-07-27T14:39:14Z"), "myState" : 1, "members" : [ { "_id" : 0, "name" : "mongo1:17017", "health" : 1, "state" : 1, "stateStr" : "PRIMARY", "optime" : { "t" : 1438007937000, "i" : 1 }, "optimeDate" : ISODate("2015-07-27T14:38:57Z"), "self" : true }, { "_id" : 1, "name" : "mongo2:27017", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 230, "optime" : { "t" : 1438007937000, "i" : 1 }, "optimeDate" : ISODate("2015-07-27T14:38:57Z"), "lastHeartbeat" : ISODate("2015-07-27T14:39:12Z"), "pingMs" : 0 }, { "_id" : 2, "name" : "mongo3:37017", "health" : 1, "state" : 7, "stateStr" : "ARBITER", "uptime" : 17, "optime" : { "t" : 0, "i" : 0 }, "optimeDate" : ISODate("1970-01-01T00:00:00Z"), "lastHeartbeat" : ISODate("2015-07-27T14:39:13Z"), "pingMs" : 0 } ], "ok" : 1 }
PRIMARY> rs.config()
{ "_id" : "replication_test", "version" : 3, "members" : [ { "_id" : 0, "host" : "mongo1:17017" }, { "_id" : 1, "host" : "mongo2:27017" }, { "_id" : 2, "host" : "mongo3:37017", "arbiterOnly" : true } ] }

Let's see what happens if I kill the PRIMARY:

root@mongo2:~# mongo --port 27017
MongoDB shell version: 2.0.6
connecting to: 127.0.0.1:27017/test
PRIMARY> rs.status()
{ "set" : "replication_test", "date" : ISODate("2015-07-27T14:41:33Z"), "myState" : 1, "syncingTo" : "mongo1:17017", "members" : [ { "_id" : 0, "name" : "mongo1:17017", "health" : 0, "state" : 8, "stateStr" : "(not reachable/healthy)", "uptime" : 0, "optime" : { "t" : 1438007937000, "i" : 1 }, "optimeDate" : ISODate("2015-07-27T14:38:57Z"), "lastHeartbeat" : ISODate("2015-07-27T14:41:13Z"), "pingMs" : 0, "errmsg" : "socket exception" }, { "_id" : 1, "name" : "mongo2:27017", "health" : 1, "state" : 1, "stateStr" : "PRIMARY", "optime" : { "t" : 1438007937000, "i" : 1 }, "optimeDate" : ISODate("2015-07-27T14:38:57Z"), "self" : true }, { "_id" : 2, "name" : "mongo3:37017", "health" : 1, "state" : 7, "stateStr" : "ARBITER", "uptime" : 154, "optime" : { "t" : 0, "i" : 0 }, "optimeDate" : ISODate("1970-01-01T00:00:00Z"), "lastHeartbeat" : ISODate("2015-07-27T14:41:33Z"), "pingMs" : 0 } ], "ok" : 1 }

As expected, the SECONDARY became the PRIMARY.

And what if I start the old PRIMARY again?

root@mongo1:~# mongod -f /etc/mongodb1.conf &

Yes - it rejoins as a SECONDARY.

root@mongo2:~# mongo --port 27017
MongoDB shell version: 2.0.6
connecting to: 127.0.0.1:27017/test
PRIMARY> rs.status();
{ "set" : "replication_test", "date" : ISODate("2015-07-27T14:42:53Z"), "myState" : 1, "syncingTo" : "mongo1:17017", "members" : [ { "_id" : 0, "name" : "mongo1:17017", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 6, "optime" : { "t" : 1438007937000, "i" : 1 }, "optimeDate" : ISODate("2015-07-27T14:38:57Z"), "lastHeartbeat" : ISODate("2015-07-27T14:42:53Z"), "pingMs" : 0 }, { "_id" : 1, "name" : "mongo2:27017", "health" : 1, "state" : 1, "stateStr" : "PRIMARY", "optime" : { "t" : 1438007937000, "i" : 1 }, "optimeDate" : ISODate("2015-07-27T14:38:57Z"), "self" : true }, { "_id" : 2, "name" : "mongo3:37017", "health" : 1, "state" : 7, "stateStr" : "ARBITER", "uptime" : 234, "optime" : { "t" : 0, "i" : 0 }, "optimeDate" : ISODate("1970-01-01T00:00:00Z"), "lastHeartbeat" : ISODate("2015-07-27T14:42:53Z"), "pingMs" : 0 } ], "ok" : 1 }

Of course I can hand the PRIMARY role back at any time by stepping down the current PRIMARY with the "rs" helper:

root@mongo2:~# mongo --port 27017
MongoDB shell version: 2.0.6
connecting to: 127.0.0.1:27017/test
PRIMARY> rs.stepDown(120)
Mon Jul 27 14:46:12 DBClientCursor::init call() failed
Mon Jul 27 14:46:12 query failed : admin.$cmd { replSetStepDown: 120.0 } to: 127.0.0.1:27017
Mon Jul 27 14:46:12 Error: error doing query: failed shell/collection.js:151
Mon Jul 27 14:46:12 trying reconnect to 127.0.0.1:27017
Mon Jul 27 14:46:12 reconnect 127.0.0.1:27017 ok
SECONDARY> rs.status()
{ "set" : "replication_test", "date" : ISODate("2015-07-27T14:46:30Z"), "myState" : 2, "syncingTo" : "mongo1:17017", "members" : [ { "_id" : 0, "name" : "mongo1:17017", "health" : 1, "state" : 1, "stateStr" : "PRIMARY", "uptime" : 15, "optime" : { "t" : 1438007937000, "i" : 1 }, "optimeDate" : ISODate("2015-07-27T14:38:57Z"), "lastHeartbeat" : ISODate("2015-07-27T14:46:29Z"), "pingMs" : 0 }, { "_id" : 1, "name" : "mongo2:27017", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "optime" : { "t" : 1438007937000, "i" : 1 }, "optimeDate" : ISODate("2015-07-27T14:38:57Z"), "self" : true }, { "_id" : 2, "name" : "mongo3:37017", "health" : 1, "state" : 7, "stateStr" : "ARBITER", "uptime" : 15, "optime" : { "t" : 0, "i" : 0 }, "optimeDate" : ISODate("1970-01-01T00:00:00Z"), "lastHeartbeat" : ISODate("2015-07-27T14:46:29Z"), "pingMs" : 0 } ], "ok" : 1 }

Tuesday, 23 December 2014

$5 EBOOK BONANZA – EVERY TITLE, EVERY TOPIC

With all $5 products available in a range of formats and DRM-free, customers will find great value content delivered exactly how they want it across Packt’s website this Christmas and New Year. From Thursday 18th December, every eBook and video will be available on the publisher’s website for just $5 until 6th January. More info: http://bit.ly/1uW4pQG

Thursday, 7 August 2014

Apache stats for bots

Recently my server got overloaded due to heavy queries from bots. I needed to know which bot was the malicious one, so I wrote a simple script that parses the Apache logs looking for bots.
#!/usr/bin/perl
use File::Basename;
use Time::Piece;
use Term::ANSIColor qw(:constants);

# log file to parse, plain text or gzipped
my $plik = $ARGV[0];

if (-T $plik){
    open(PLIK,"$plik")||die "cannot open file: $plik!!!\n";
} elsif(-B $plik){
    open(PLIK,"zcat $plik |")||die "cannot open file: $plik!!!\n";
} else {
    print "File $plik cannot be opened\n";
    exit;
}
while(defined($log=<PLIK>)){
    my ($host,$date,$reqtype,$url,$proto,$status,$size,$referrer,$agent) = $log =~ m/^(\S+) - - \[(\S+ [\-|\+]\d{4})\] "(GET|POST)\s(.+)\sHTTP\/(\d.\d)" (\d{3}) (\d+|-) "(.*?)" "([^"]+)"$/;
    if ($status eq "200" && $reqtype eq "GET" && $agent =~ m/bot/i){
        my $dt = Time::Piece->strptime($date, '%d/%b/%Y:%H:%M:%S %z');
        $date = $dt->strftime('%Y-%m-%d');
        $slugnumber{$agent}{$date}{$host}++;
        $bot{$agent}++;
    }
}
close(PLIK);
foreach $klucz (sort keys %slugnumber){
    print "\n================================================\n";
    print BOLD,BLUE,"\n $klucz \n",RESET;
    foreach $data (keys %{ $slugnumber{$klucz} }){
        print BOLD,BLUE,"\n $data \n",RESET;
        foreach $ipek (keys %{ $slugnumber{$klucz}{$data} }){
            print "$klucz $data [$ipek] : $slugnumber{$klucz}{$data}{$ipek}\n"
        }
    }
}
Below is sample output:
testing> perl ipstats.pl /var/log/apache/access.log

================================================

 Yeti/1.1 (Naver Corp.; http://help.naver.com/robots/)

 2014-08-05
Yeti/1.1 (Naver Corp.; http://help.naver.com/robots/) 2014-08-05 [125.209.211.199] : 1

 2014-08-04
Yeti/1.1 (Naver Corp.; http://help.naver.com/robots/) 2014-08-04 [125.209.211.199] : 1

================================================

 msnbot/2.0b (+http://search.msn.com/msnbot.htm)

 2014-08-05
msnbot/2.0b (+http://search.msn.com/msnbot.htm) 2014-08-05 [65.55.213.247] : 10
msnbot/2.0b (+http://search.msn.com/msnbot.htm) 2014-08-05 [65.55.213.243] : 4
msnbot/2.0b (+http://search.msn.com/msnbot.htm) 2014-08-05 [65.55.213.242] : 2
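For a quick ad-hoc count without Perl, a rough awk equivalent can do the same filtering. This is only a sketch: it assumes the standard Apache "combined" log format, where splitting on the double quote puts the request line in $2, the status/size in $3 and the user agent in $6; the sample log lines below are made up for the demo.

```shell
# Hypothetical sample log in Apache "combined" format
cat > access_sample.log <<'EOF'
1.2.3.4 - - [05/Aug/2014:10:00:00 +0000] "GET /x HTTP/1.1" 200 123 "-" "msnbot/2.0b (+http://search.msn.com/msnbot.htm)"
1.2.3.4 - - [05/Aug/2014:10:00:05 +0000] "GET /y HTTP/1.1" 404 0 "-" "msnbot/2.0b (+http://search.msn.com/msnbot.htm)"
5.6.7.8 - - [05/Aug/2014:10:01:00 +0000] "GET /z HTTP/1.1" 200 99 "-" "Mozilla/5.0"
EOF

# Count successful GETs per bot user agent
# (-F'"' splits on the double quote: $2 = request, $3 = " status size ", $6 = agent)
awk -F'"' 'tolower($6) ~ /bot/ && $2 ~ /^GET/ && $3 ~ /^ 200 / { n[$6]++ }
           END { for (a in n) print n[a], a }' access_sample.log
```

On the sample above this prints one line, `1 msnbot/2.0b (+http://search.msn.com/msnbot.htm)` - the 404 hit and the non-bot agent are filtered out.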

Wednesday, 25 June 2014

Grep - get first and last line

Recently I had to search my logs for a certain message and correlate it with users' login/logout times.

I needed the approximate time of the first ("start") and last ("end") occurrences in the logs (logs which contain huge numbers of messages with different timestamps but the same text).

I used sed and grep for this:

root@testing:~# for i in `ls /var/log/syslog/syslog*`;do zgrep 'port 1099' $i | sed -n '1p;$p'; done;
Jun 25 08:18:01 testing sshd[33286]: error: connect_to x.y.z.c port 1099: failed.
Jun 25 11:30:52 testing sshd[45831]: error: connect_to x.y.z.d port 1099: failed.
Jun 24 07:55:04 testing sshd[64527]: error: connect_to x.y.z.d port 1099: failed.
Jun 24 11:53:13 testing sshd[64527]: error: connect_to x.y.z.c port 1099: failed.
Jun 23 08:59:52 testing sshd[34130]: error: connect_to x.y.z.c port 1099: failed.
Jun 23 15:28:38 testing sshd[34130]: error: connect_to x.y.z.d port 1099: failed.
Jun 20 08:24:51 testing sshd[64526]: error: connect_to x.y.z.c port 1099: failed.
Jun 20 10:55:46 testing sshd[7805]: error: connect_to x.y.z.c port 1099: failed.
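The sed expression is the whole trick here: -n suppresses automatic printing, 1p prints the first line of its input, and $p prints the last. A minimal self-contained demo on a made-up log (the log lines are hypothetical):

```shell
# Build a small fake log, just for the demo
printf '%s\n' \
  'Jun 25 08:18:01 testing sshd[1]: error: connect_to a port 1099: failed.' \
  'Jun 25 09:30:00 testing sshd[2]: some other message' \
  'Jun 25 10:12:34 testing sshd[3]: error: connect_to b port 1099: failed.' \
  'Jun 25 11:30:52 testing sshd[4]: error: connect_to c port 1099: failed.' > demo.log

# First and last match only: the estimated "start" and "end" of the activity
grep 'port 1099' demo.log | sed -n '1p;$p'
```

One caveat: if only a single line matches, it is printed twice, because both 1p and $p fire on the same line.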

Monday, 16 June 2014

Get all files from remote directory using wget

wget -A pdf,jpg -m -p -E -k -K -np http://site/path/
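For reference, the same command spelled out with GNU wget's long options, which are easier to read back later (flag meanings per GNU wget; double-check against your installed version):

```
#   -A pdf,jpg   --accept pdf,jpg       download only these extensions
#   -m           --mirror               recursion + timestamping, infinite depth
#   -p           --page-requisites      also fetch files needed to render pages
#   -E           --adjust-extension     save HTML with a proper .html extension
#   -k           --convert-links        rewrite links for local browsing
#   -K           --backup-converted     keep the original file as *.orig
#   -np          --no-parent            never ascend above /path/
wget --accept pdf,jpg --mirror --page-requisites --adjust-extension \
     --convert-links --backup-converted --no-parent http://site/path/
```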