
IRC log for #gluster, 2017-08-17


All times shown according to UTC.

Time Nick Message
00:00 dijuremo joined #gluster
00:40 jbrooks joined #gluster
01:31 h4rry joined #gluster
01:34 jbrooks joined #gluster
01:35 vbellur joined #gluster
01:52 ilbot3 joined #gluster
01:52 Topic for #gluster is now Gluster Community - https://www.gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - https://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:57 baojg joined #gluster
02:02 omie888777 joined #gluster
02:10 kpease joined #gluster
02:45 dominicpg joined #gluster
02:54 skoduri joined #gluster
03:10 bwerthmann joined #gluster
03:38 atinmu joined #gluster
03:40 riyas joined #gluster
03:48 itisravi joined #gluster
03:50 mlhess joined #gluster
03:54 mlhess joined #gluster
03:58 nbalacha joined #gluster
03:58 kramdoss_ joined #gluster
03:59 baojg joined #gluster
04:09 dijuremo joined #gluster
04:09 gyadav joined #gluster
04:12 dominicpg joined #gluster
04:20 masber joined #gluster
04:24 poornima joined #gluster
04:25 Shu6h3ndu joined #gluster
04:30 jbrooks joined #gluster
04:42 jiffin joined #gluster
04:47 ilbot3 joined #gluster
04:47 Topic for #gluster is now Gluster Community - https://www.gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - https://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
04:47 delhage joined #gluster
04:47 _KaszpiR_ joined #gluster
04:47 kenansulayman joined #gluster
04:47 xiu_ joined #gluster
04:47 JonathanD joined #gluster
04:47 delhage joined #gluster
04:47 csaba1 joined #gluster
04:47 semiosis joined #gluster
04:48 lkoranda joined #gluster
04:48 foster_ joined #gluster
04:48 mlg9000 joined #gluster
04:48 Urania joined #gluster
04:48 introspectr joined #gluster
04:48 shruti joined #gluster
04:48 crag joined #gluster
04:48 tom[] joined #gluster
04:49 zerick joined #gluster
04:49 Igel joined #gluster
04:49 tamalsaha[m] joined #gluster
04:49 rwheeler joined #gluster
04:49 mk-fg joined #gluster
04:49 nadley joined #gluster
04:50 mlhess joined #gluster
04:50 pdrakeweb joined #gluster
04:50 yoavz joined #gluster
04:50 mk-fg joined #gluster
04:50 JoeJulian joined #gluster
04:50 skoduri joined #gluster
04:50 marlinc joined #gluster
04:50 yosafbridge joined #gluster
04:50 Klas joined #gluster
04:51 [o__o] joined #gluster
04:51 cholcombe joined #gluster
04:51 dataio joined #gluster
04:51 valkyr3e joined #gluster
04:51 varesa joined #gluster
04:55 raginbajin joined #gluster
04:58 devyani7 joined #gluster
05:02 ndarshan joined #gluster
05:03 major joined #gluster
05:04 krk joined #gluster
05:05 prasanth|pto joined #gluster
05:12 kdhananjay joined #gluster
05:13 ankitr joined #gluster
05:18 masuberu joined #gluster
05:19 smohan[m] joined #gluster
05:19 georgeangel[m] joined #gluster
05:22 apandey joined #gluster
05:25 masber joined #gluster
05:25 karthik_us joined #gluster
05:30 susant joined #gluster
05:30 ic0n joined #gluster
05:33 sahina joined #gluster
05:33 apandey_ joined #gluster
05:34 apandey_ joined #gluster
05:35 susant joined #gluster
05:36 ankitr joined #gluster
05:38 jiffin joined #gluster
05:42 buvanesh_kumar joined #gluster
05:45 Bardack joined #gluster
05:47 ppai joined #gluster
05:48 sona joined #gluster
05:51 XpineX joined #gluster
05:53 Humble joined #gluster
05:53 skoduri joined #gluster
05:57 hgowtham joined #gluster
06:03 rafi1 joined #gluster
06:06 kotreshhr joined #gluster
06:10 Saravanakmr joined #gluster
06:10 msvbhat joined #gluster
06:13 sanoj joined #gluster
06:21 riyas joined #gluster
06:25 saltsa joined #gluster
06:26 ashiq joined #gluster
06:28 purpleidea joined #gluster
06:28 purpleidea joined #gluster
06:30 gyadav joined #gluster
06:31 rafi joined #gluster
06:31 gyadav left #gluster
06:31 gyadav joined #gluster
06:42 bEsTiAn joined #gluster
06:42 nigelb joined #gluster
06:49 armyriad joined #gluster
06:52 riyas joined #gluster
06:59 itisravi joined #gluster
07:01 baojg joined #gluster
07:10 apandey__ joined #gluster
07:24 weller joined #gluster
07:34 h4rry joined #gluster
07:37 misc joined #gluster
07:42 rastar joined #gluster
08:03 ilbot3 joined #gluster
08:03 Topic for #gluster is now Gluster Community - https://www.gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - https://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
08:03 rafi joined #gluster
08:04 [diablo] joined #gluster
08:08 FuzzyVeg joined #gluster
08:09 legreffier joined #gluster
08:11 mbukatov joined #gluster
08:11 aravindavk joined #gluster
08:16 msvbhat joined #gluster
08:30 atinmu joined #gluster
08:33 owlbot joined #gluster
08:41 ndarshan joined #gluster
08:45 msvbhat joined #gluster
08:50 gospod2 joined #gluster
08:53 sahina joined #gluster
08:57 itisravi__ joined #gluster
08:57 apandey_ joined #gluster
09:02 itisravi joined #gluster
09:10 msvbhat joined #gluster
09:13 _KaszpiR_ joined #gluster
09:13 karthik_us joined #gluster
09:14 riyas joined #gluster
09:21 sahina joined #gluster
09:27 social joined #gluster
09:36 apandey__ joined #gluster
09:40 Kassandry joined #gluster
09:49 msvbhat joined #gluster
09:58 karthik_us joined #gluster
10:00 gyadav_ joined #gluster
10:00 sanoj joined #gluster
10:02 _ndevos joined #gluster
10:02 _ndevos joined #gluster
10:06 cloph joined #gluster
10:08 msvbhat joined #gluster
10:12 susant joined #gluster
10:14 n-st joined #gluster
10:14 Gugge joined #gluster
10:21 gyadav joined #gluster
10:21 rastar joined #gluster
10:22 DJClean joined #gluster
10:47 atinmu joined #gluster
10:51 buvanesh_kumar joined #gluster
10:56 devyani7 joined #gluster
11:01 _KaszpiR_ joined #gluster
11:02 ppai joined #gluster
11:06 Wizek_ joined #gluster
11:21 rastar joined #gluster
11:29 WebertRLZ joined #gluster
11:34 shyam joined #gluster
11:34 Teraii joined #gluster
11:36 ilbot3 joined #gluster
11:36 Topic for #gluster is now Gluster Community - https://www.gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - https://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
11:38 baber joined #gluster
11:58 ilbot3 joined #gluster
11:58 Topic for #gluster is now Gluster Community - https://www.gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - https://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
12:07 skoduri joined #gluster
12:14 Saravanakmr joined #gluster
12:24 atinmu joined #gluster
12:39 Saravanakmr joined #gluster
12:46 nh2 joined #gluster
12:52 karthik_us joined #gluster
12:55 baojg joined #gluster
12:56 X-ian joined #gluster
12:56 X-ian hi.
12:58 X-ian problem: my bricks are killed (mostly) every day around 7:05 am
13:00 cloph oom-killer due to some bogus crawling "system check" process?
13:03 X-ian auditd output suggests that there is a signal 15 delivered from glusterfsd. cron.daily is done at around 6:50
13:04 cloph that is the regular terminate signal, so it is being told to shut down...
13:05 X-ian checked syslog: no oom message
13:06 X-ian why would that signal be sent?
13:07 bluenemo joined #gluster
13:08 susant joined #gluster
13:11 nbalacha joined #gluster
13:11 cloph oom = out-of-memory - that is not a specific signal, but the reason why the kernel would kill off other processes to free RAM.
13:11 cloph if that's done there's typically a listing of possible candidates and their memory usage in the log
13:13 ic0n joined #gluster
13:13 X-ian iirc this is some event that would be logged through syslog
13:14 cloph yes, OOM kills will be logged (at least on all systems I used so far)
13:16 X-ian but there are no logged oom events
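
For anyone hitting the same symptom, a minimal sketch for narrowing this down, assuming a systemd-based distro with auditd running (the audit key name sig15 is arbitrary). The kernel always logs OOM kills to its own ring buffer, so checking the journal around the kill window plus auditing the kill syscall should identify the sender:

    # confirm or rule out the OOM killer via the kernel journal
    journalctl -k --since "07:00" --until "07:10" | grep -i -e oom -e "killed process"

    # record every kill() delivering signal 15; a1 is the second syscall
    # argument, i.e. the signal number (a sender using tkill/tgkill would
    # need those syscalls added to the rule as well)
    auditctl -a always,exit -F arch=b64 -S kill -F a1=15 -k sig15

    # after the next 7:05 kill, look up the sender (exe=, pid=, auid= fields)
    ausearch -k sig15 --start today -i
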
13:18 hosom joined #gluster
13:20 hosom good morning! the documentation doesn't make it clear: is lvm a requirement for gluster bricks?
13:20 sahina joined #gluster
13:21 shyam joined #gluster
13:22 anoopcs hosom, if you are planning to have LVM-based snapshots and further management via gluster... yes, it is required
13:22 hosom is there a place to view lvm specific features? I understand that it is required for lvm-snapshotting and a couple other things
13:23 hosom I'm not inherently opposed to lvm, it's just that if we aren't making use of any of the features it enables, we might have a discussion about building without that layer
13:27 X-ian hosom: we've been using lvm for over 10 years now. it really pays off
13:29 humblec joined #gluster
13:31 cloph hosom: so answer is "no", you don't need lvm for gluster.
13:32 cloph you can put gluster bricks on any filesystem that supports extended attributes.
13:32 cloph Whether you put that filesystem on a lvm backed volume or plain partition or raid is up to you
13:33 cloph I tried lvm raid/stripes but that didn't work out too well, so if you use lvm, better put it on top of a dedicated raid anyway
13:34 X-ian cloph: what happened?
13:34 skylar joined #gluster
13:34 cloph performance sucked - didn't really investigate what exactly was to blame
13:35 X-ian ic
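
For context, a brick without lvm is just a directory on a filesystem with extended-attribute support. A minimal sketch following the upstream quick-start conventions (the device name /dev/sdb1, mount point /data/brick1, and volume name gv0 are placeholders):

    # format a plain partition with xfs (inode size per the gluster docs)
    mkfs.xfs -i size=512 /dev/sdb1
    mkdir -p /data/brick1
    mount /dev/sdb1 /data/brick1

    # sanity-check extended attribute support on the brick filesystem
    touch /data/brick1/xattr-test
    setfattr -n user.test -v ok /data/brick1/xattr-test
    getfattr -n user.test /data/brick1/xattr-test

    # create and start a replicated volume on the partition-backed
    # bricks - no lvm involved
    gluster volume create gv0 replica 2 server1:/data/brick1/gv0 server2:/data/brick1/gv0
    gluster volume start gv0
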
13:35 hosom We have a use case where we will be putting hundreds of TB together into single volumes to serve cold storage for splunk. our storage nodes come fully populated, so scaling within the node isn't going to be a concept for us, and snapshotting is unlikely to occur due to the sheer size of the diffs during times that it would be beneficial
13:36 hosom our nodes use raid 10 for the bricks, so we're not terribly worried about data loss or RAID causing performance drops
13:36 buvanesh_kumar joined #gluster
13:36 h4rry joined #gluster
13:37 hosom the desire to eliminate lvm from the equation is just to get rid of one more moving part, not some blind bigotry against it or anything... I just want to make sure that I'm not going to regret that decision later
13:38 X-ian hosom: ok. that's out of my league. biggest system here has 15TB :-)
13:38 hosom haha... all said and done, we're looking at starting this at two 240TB volumes and scaling out to about a PB in the next 5 years
13:40 kotreshhr left #gluster
13:55 dominicpg joined #gluster
13:55 d4n13L joined #gluster
14:04 shyam joined #gluster
14:25 humblec joined #gluster
14:25 Humble joined #gluster
14:37 Humble joined #gluster
14:43 m0zes joined #gluster
14:44 m0zes joined #gluster
14:50 farhorizon joined #gluster
15:06 shyam joined #gluster
15:09 wushudoin joined #gluster
15:09 wushudoin joined #gluster
15:13 shyam joined #gluster
15:14 msvbhat joined #gluster
15:16 h4rry joined #gluster
15:21 mb_ joined #gluster
15:23 Ramereth joined #gluster
15:23 aravindavk joined #gluster
15:24 vbellur joined #gluster
15:25 edong23 joined #gluster
15:25 vbellur joined #gluster
15:28 vbellur joined #gluster
15:28 vbellur1 joined #gluster
15:29 vbellur joined #gluster
15:31 logan- joined #gluster
15:35 jiffin joined #gluster
15:37 Guest9038 joined #gluster
15:40 logan- joined #gluster
15:43 Somedream joined #gluster
15:46 vbellur joined #gluster
15:50 vbellur1 joined #gluster
16:00 Gambit15 joined #gluster
16:07 vbellur joined #gluster
16:12 aravindavk joined #gluster
16:15 rastar joined #gluster
16:23 susant joined #gluster
16:25 shyam joined #gluster
16:29 Saravanakmr joined #gluster
16:35 shyam joined #gluster
16:44 jiffin1 joined #gluster
17:09 sona joined #gluster
17:27 ic0n joined #gluster
17:27 hosom after rebooting a node, it shows as Disconnected in peer status, I'm assuming that I'm doing something wrong here?
17:29 farhorizon joined #gluster
17:30 cloph forgot to start the service? Forgot to make firewall rules persistent? Forgot to make routing rules persistent? impossible to tell with just that bit of info :-)
17:33 hosom service starts, says it is in a good status, firewall is configured to allow all ports/protocols from peer IP addresses
17:34 h4rry joined #gluster
17:34 hosom peers are layer 2 adjacent
17:35 hosom running a volume start force gets the bricks back into the volume, but the peer status still returns 'disconnected' for that node on every other node
17:35 hosom if you run peer status on the node that was restarted, it says everything is dandy
17:36 ic0n joined #gluster
17:39 hosom ips are statically configured, so there shouldn't be anything goofy going on like the service starting before networking is up
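
A hedged checklist for this situation, assuming stock glusterd paths on a CentOS/RHEL-style install; nothing here is specific to hosom's setup:

    # on the rebooted node: is glusterd up, and what does it think of its peers?
    systemctl status glusterd
    gluster peer status

    # on another node: compare its view of the peer and the bricks
    gluster peer status
    gluster volume status

    # the glusterd log usually names the reason for a stuck Disconnected
    # state (rpc/handshake errors, stale entries under /var/lib/glusterd/peers, ...)
    tail -n 100 /var/log/glusterfs/glusterd.log

    # if glusterd raced the network at boot, a restart after the network
    # is up often clears the state
    systemctl restart glusterd
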
17:56 jiffin joined #gluster
18:04 h4rry joined #gluster
18:08 jiffin joined #gluster
18:15 baber joined #gluster
18:22 gospod2 joined #gluster
18:23 jiffin joined #gluster
18:28 farhorizon joined #gluster
18:30 jiffin joined #gluster
18:31 Bardack_ joined #gluster
18:59 ashiq joined #gluster
19:01 farhorizon joined #gluster
19:16 farhorizon joined #gluster
19:19 mb_ joined #gluster
19:31 _KaszpiR_ joined #gluster
19:40 cliluw joined #gluster
19:41 alvinstarr joined #gluster
19:46 pioto joined #gluster
20:11 jackhill joined #gluster
20:18 baber joined #gluster
20:25 farhoriz_ joined #gluster
20:38 shyam joined #gluster
20:40 Humble joined #gluster
21:02 TBlaar2 joined #gluster
21:03 farhorizon joined #gluster
21:12 Jacob843 joined #gluster
21:20 ThHirsch joined #gluster
21:22 jbrooks joined #gluster
21:25 ic0n joined #gluster
21:36 vbellur joined #gluster
21:44 freephile joined #gluster
21:44 plarsen joined #gluster
21:47 vbellur1 joined #gluster
21:55 shyam joined #gluster
22:02 shyam joined #gluster
22:12 ic0n joined #gluster
22:13 jkroon joined #gluster
22:14 unixfg joined #gluster
22:29 decay joined #gluster
22:30 mb_ joined #gluster
22:38 ic0n joined #gluster
22:50 ws2k3 joined #gluster
22:57 freephile Anyone know why / how to fix *failed ack*:
22:57 freephile fatal: [10.0.50.68]: FAILED! => {"changed": false, "failed": true, "msg": "error running gluster (/sbin/gluster peer probe 10.0.50.86) command (rc=1): peer probe: failed: Failed to get handshake ack from remote server\n"}
23:00 freephile glusterfs 3.10.5
23:00 freephile simple two node setup using ansible to deploy
23:01 fassl joined #gluster
23:01 freephile On Centos7. I've stopped firewalld.service
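
A minimal set of checks for a failed handshake ack on a fresh probe, assuming default ports and log paths; 10.0.50.86 stands in for the node being probed:

    # on the probed node: glusterd must be running and listening on 24007
    systemctl status glusterd
    ss -tlnp | grep 24007

    # firewalld is stopped here, but selinux can also interfere on centos 7
    getenforce

    # compare versions on both nodes - mismatched op-versions are a
    # common cause of handshake failures
    gluster --version

    # retry the probe and read glusterd's own account of the failure
    gluster peer probe 10.0.50.86
    tail -n 50 /var/log/glusterfs/glusterd.log
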
23:08 vbellur joined #gluster
23:23 JGS joined #gluster
