
IRC log for #gluster, 2016-08-04


All times shown according to UTC.

Time Nick Message
00:17 shyam joined #gluster
00:33 johnmilton joined #gluster
00:37 BitByteNybble110 joined #gluster
00:40 Pupeno joined #gluster
01:08 masuberu joined #gluster
01:24 harish_ joined #gluster
01:35 derjohn_mobi joined #gluster
01:47 ilbot3 joined #gluster
01:47 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:52 newdave joined #gluster
02:01 masuberu joined #gluster
02:03 Alghost_ joined #gluster
02:16 shdeng joined #gluster
02:25 plarsen joined #gluster
02:29 prasanth joined #gluster
02:35 _Bryan_ joined #gluster
02:36 _Bryan_ Anyone aroudn that can help me with a self heal?
02:36 _Bryan_ or more specifically with a split brain
02:36 _Bryan_ with "gluster volume heal vol-name info split-brain" just reporting a lot of /
02:39 hchiramm joined #gluster
02:53 Alghost joined #gluster
02:55 Alghost_ joined #gluster
03:14 magrawal joined #gluster
03:20 sanoj joined #gluster
03:25 sanoj joined #gluster
03:51 nbalacha joined #gluster
03:52 elico joined #gluster
03:54 atinm joined #gluster
04:02 ppai joined #gluster
04:04 itisravi joined #gluster
04:10 ira_ joined #gluster
04:13 nishanth joined #gluster
04:16 julim joined #gluster
04:20 poornimag joined #gluster
04:26 nbalacha joined #gluster
04:27 shubhendu joined #gluster
04:32 somlin22 joined #gluster
04:37 aspandey joined #gluster
04:41 itisravi joined #gluster
04:45 nehar joined #gluster
04:48 rafi joined #gluster
05:02 karthik_ joined #gluster
05:09 jiffin joined #gluster
05:12 Muthu joined #gluster
05:16 ndarshan joined #gluster
05:22 aravindavk joined #gluster
05:29 raghug joined #gluster
05:30 elico joined #gluster
05:35 itisravi joined #gluster
05:35 nishanth joined #gluster
05:38 bwerthmann joined #gluster
05:38 satya4ever joined #gluster
05:41 skoduri joined #gluster
05:41 RameshN joined #gluster
05:43 ppai joined #gluster
05:45 karnan joined #gluster
05:45 hgowtham joined #gluster
05:46 karthik_ joined #gluster
05:50 rafi1 joined #gluster
05:51 ndarshan joined #gluster
05:51 shubhendu joined #gluster
05:52 Muthu joined #gluster
05:53 Manikandan joined #gluster
06:09 [diablo] joined #gluster
06:09 Philambdo joined #gluster
06:10 jtux joined #gluster
06:16 rwheeler joined #gluster
06:19 kshlm joined #gluster
06:31 aspandey joined #gluster
06:42 kdhananjay joined #gluster
06:55 devyani7_ joined #gluster
06:59 level7 joined #gluster
07:02 Apeksha joined #gluster
07:04 Gnomethrower joined #gluster
07:17 om joined #gluster
07:25 Pupeno joined #gluster
07:31 ndarshan joined #gluster
07:32 Gnomethrower joined #gluster
07:36 Siavash__ joined #gluster
07:37 masuberu joined #gluster
07:40 nohitall is there any point in using more than 2 bricks on 2 nodes? for the same volume
07:44 rafi1 joined #gluster
07:45 itisravi_ joined #gluster
07:50 ndarshan joined #gluster
07:52 hackman joined #gluster
07:53 deniszh joined #gluster
07:53 fsimonce joined #gluster
07:57 ppai joined #gluster
08:01 msvbhat joined #gluster
08:03 _nixpanic joined #gluster
08:03 _nixpanic joined #gluster
08:05 hchiramm joined #gluster
08:07 Slashman joined #gluster
08:14 shubhendu joined #gluster
08:15 nishanth joined #gluster
08:16 derjohn_mobi joined #gluster
08:20 somlin22 joined #gluster
08:20 jiffin nohitall: from an availability perspective, IMO the answer is no
08:22 satya4ever_ joined #gluster
08:24 masuberu joined #gluster
08:27 level7 joined #gluster
08:27 karnan joined #gluster
08:34 ramky joined #gluster
08:34 nohitall jiffin: Do you know, in a 2 node 2 replica setup, if I have a host failure with default settings, will the surviving node go into read-only mode?
08:34 nohitall I assume yes, haven't tested yet
08:38 jiffin if u set server quorum, then it will become read-only
08:38 level7_ joined #gluster
08:38 level7 joined #gluster
08:38 jiffin if you set server quorum, then it will become read-only, otherwise not
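[For reference: the quorum behaviour jiffin describes is driven by a few standard volume options. A minimal sketch, assuming a volume named "vol-name" (the name is a placeholder, and the exact read-only vs. brick-shutdown behaviour varies by Gluster version):

    # server-side quorum: glusterd stops local bricks when it loses quorum with its peers
    gluster volume set vol-name cluster.server-quorum-type server
    # cluster-wide quorum ratio (percentage of peers that must be reachable); set on "all"
    gluster volume set all cluster.server-quorum-ratio 51
    # client-side quorum: writes are rejected when too few replicas are reachable
    gluster volume set vol-name cluster.quorum-type auto
]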
08:39 atalur joined #gluster
08:43 itisravi joined #gluster
08:46 somlin22 joined #gluster
08:52 deniszh joined #gluster
08:55 ahino joined #gluster
08:59 karthik_ joined #gluster
08:59 ashiq joined #gluster
09:05 Manikandan joined #gluster
09:12 deniszh1 joined #gluster
09:21 armyriad joined #gluster
09:22 arcolife joined #gluster
09:25 pur joined #gluster
09:32 jiffin1 joined #gluster
09:41 masuberu joined #gluster
09:53 Wizek joined #gluster
09:56 aspandey joined #gluster
09:58 atalur joined #gluster
09:59 karnan joined #gluster
10:00 jwd joined #gluster
10:05 jiffin joined #gluster
10:15 masuberu joined #gluster
10:16 jiffin joined #gluster
10:23 somlin22 joined #gluster
10:25 ahino joined #gluster
10:25 msvbhat joined #gluster
10:39 Wizek joined #gluster
10:46 ahino joined #gluster
10:47 wadeholler joined #gluster
10:58 aspandey_ joined #gluster
11:00 Philambdo joined #gluster
11:04 Pupeno joined #gluster
11:07 msvbhat joined #gluster
11:10 level7 joined #gluster
11:11 Philambdo joined #gluster
11:17 Mmike joined #gluster
11:17 Mmike joined #gluster
11:18 jtux joined #gluster
11:38 armyriad joined #gluster
11:39 armyriad joined #gluster
11:41 johnmilton joined #gluster
11:41 B21956 joined #gluster
11:47 Manikandan joined #gluster
11:57 kotreshhr joined #gluster
12:02 johnmilton joined #gluster
12:04 Alghost joined #gluster
12:04 Pupeno joined #gluster
12:07 armyriad joined #gluster
12:07 somlin22 joined #gluster
12:16 ira_ joined #gluster
12:23 samppah http://download.gluster.org/pub/gluster/glusterfs/nfs-ganesha/2.3.2/ seems to be missing EPEL repositories.. are there nfs-ganesha-gluster packages available somewhere else?
12:23 glusterbot Title: Index of /pub/gluster/glusterfs/nfs-ganesha/2.3.2 (at download.gluster.org)
12:23 somlin22 joined #gluster
12:26 jiffin samppah: u can get it from centos storage sig
12:29 jiffin samppah: https://cbs.centos.org
12:29 glusterbot Title: Build System Info | c1bf.rdu2.centos.org build server (at cbs.centos.org)
12:29 ndevos samppah: on CentOS: yum install centos-release-gluster ; yum --enablerepo=centos-gluster38-test install nfs-ganesha-gluster
12:30 ndevos samppah: until now, nobody reported test results of the nfs-ganesha packages in the centos-gluster38-test repo, otherwise they would be on the CentOS mirrors already
12:31 Alghost_ joined #gluster
12:31 samppah thank you jiffin and ndevos, it seems that version 2.3.2 is also available in the centos-gluster37-test repo
12:32 samppah ndevos: okay, thank you for the clarification :)
12:34 ndevos samppah: yeah, the gluster37 version has those rpms too, and they have been tested by others already :)
12:35 level7_ joined #gluster
12:36 Slashman joined #gluster
12:41 nbalacha joined #gluster
12:44 baojg_ joined #gluster
12:44 hackman joined #gluster
12:49 Pupeno joined #gluster
12:51 karnan joined #gluster
12:51 s-hell left #gluster
12:52 karnan joined #gluster
12:58 Pupeno_ joined #gluster
13:00 Pupeno joined #gluster
13:08 javiM joined #gluster
13:13 Pupeno joined #gluster
13:13 Pupeno joined #gluster
13:16 Pupeno joined #gluster
13:17 mchangir joined #gluster
13:18 Pupeno joined #gluster
13:20 Philambdo joined #gluster
13:20 derjohn_mobi joined #gluster
13:31 julim joined #gluster
13:34 nohitall if I test detaching a node from a 2-node volume and run gluster volume status on node1, it hangs forever, is that normal?
13:35 Philambdo1 joined #gluster
13:39 dnunez joined #gluster
13:43 Pupeno joined #gluster
13:43 Pupeno joined #gluster
13:49 level7 joined #gluster
13:58 Siavash__ joined #gluster
14:00 shyam joined #gluster
14:01 dnunez joined #gluster
14:06 skoduri joined #gluster
14:16 jiffin joined #gluster
14:21 hagarth joined #gluster
14:21 bowhunter joined #gluster
14:28 Wizek joined #gluster
14:46 dlambrig_ joined #gluster
14:46 msvbhat joined #gluster
14:51 armyriad joined #gluster
14:53 Siavash__ joined #gluster
14:53 BitByteNybble110 joined #gluster
14:53 newdave joined #gluster
14:54 nohitall_ joined #gluster
14:54 nohitall left #gluster
14:54 derjohn_mobi joined #gluster
14:55 nohitall_ left #gluster
14:55 nohitall_ joined #gluster
14:55 nohitall_ hiho
14:57 BitByteNybble110 joined #gluster
14:58 nohitall_ I got a question, I have a 2 node setup with a replica 2 volume; on a simulated node failure (unplugging the eth cable) the mount on node1 still works BUT it hangs for a few seconds, and running "gluster volume status" hangs forever as long as node2 is off. Is that normal?
14:58 Pupeno joined #gluster
14:59 BitByteNybble110 joined #gluster
15:01 hagarth joined #gluster
15:04 rafi joined #gluster
15:04 Pupeno joined #gluster
15:05 BitByteNybble110 joined #gluster
15:06 Pupeno joined #gluster
15:12 elico joined #gluster
15:17 wushudoin joined #gluster
15:21 masuberu joined #gluster
15:22 harish_ joined #gluster
15:23 devyani7__ joined #gluster
15:31 shyam1 joined #gluster
15:33 rafi joined #gluster
15:34 Gnomethrower joined #gluster
15:45 Siavash__ joined #gluster
15:54 rafi joined #gluster
16:07 prasanth joined #gluster
16:09 aravindavk joined #gluster
16:11 gluster-newb joined #gluster
16:11 gluster-newb So we have a 3-node replicated cluster. Our gluster client (mounted via glusterfs) got disconnected from a brick because of a 42 second ping-timeout (probably caused by heavy IO load). How long does it take for the client to connect back to the brick, and is that configurable?
16:13 malevolent joined #gluster
16:13 xavih joined #gluster
16:14 masuberu joined #gluster
16:16 xavih_ joined #gluster
16:20 JoeJulian @ping-timeout
16:20 glusterbot JoeJulian: The reason for the long (42 second) ping-timeout is because re-establishing fd's and locks can be a very expensive operation. With an average MTBF of 45000 hours for a server, even just a replica 2 would result in a 42 second MTTR every 2.6 years, or 6 nines of uptime.
16:20 mchangir joined #gluster
16:20 JoeJulian nohitall_: ^
16:21 JoeJulian gluster-newb: iirc, the client retries every 3 seconds.
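[Spelling out the arithmetic behind glusterbot's figures (approximate):

    2 servers x (1 failure / 45000 h MTBF)  ->  about 1 failure per 22500 h, i.e. every ~2.6 years
    42 s of MTTR per ~81,000,000 s of uptime (22500 h x 3600 s/h)  ->  ~5.2e-7 unavailability
    availability ~= 99.99995%, i.e. roughly six nines
]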
16:21 nohitall_ but that would break all my VMs if I had glusterfs as a storage for VM disks
16:22 nohitall_ is there a way to smooth it out? a node failure shouldn't affect the storage at all otherwise
16:23 JoeJulian Do your "nodes" fail that frequently?
16:23 nehar joined #gluster
16:23 kpease joined #gluster
16:24 nohitall_ JoeJulian: that's not the point, the issue is that the freeze would cause all VMs to die/cause issues, it basically can never happen
16:24 nohitall_ so it happens even with 3+ nodes?
16:25 nohitall_ I was looking to use glusterfs on 2+ nodes for storage for VM disks for xenserver pool
16:25 JoeJulian If you distribute more, any volumes that are not on the replica pair that have failed will not be affected. Build out horizontally to about a 5000 replica and you, statistically, shouldn't see a failure for any single VM for a lifetime.
16:26 nohitall_ so you're saying all 2/3-node volumes experience this 42 sec ping timeout on node failure?
16:26 nohitall_ how do people deal with that?
16:26 JoeJulian All clients that are connected to a TCP connection that stops responding will wait for it to respond for up to ping-timeout.
16:27 JoeJulian We handle it by writing SLA's that are reasonable.
16:27 JoeJulian Actually, that's a lie.
16:27 nohitall_ ok so what should I change to use glusterfs for VM disk storage? what would an ideal setup look like then?
16:28 nohitall_ I assumed people use it for that purpose, maybe I am wrong?!
16:28 JoeJulian We at IO write 100% SLAs (which is B.S. imho) and just pay out if we have an issue.
16:29 JoeJulian An ideal system would exist in a quantum matrix that uses no power and produces no heat and uses entanglement to instantaneously provide the data anywhere in the universe instantly.
16:30 JoeJulian And it would come with a Unicorn.
16:30 nohitall_ hehe :)
16:30 nohitall_ hey I am just asking because I really dont know and I know glusterfs is widely used
16:30 nohitall_ and setting up lustrefs is probably more difficult
16:30 JoeJulian But seriously, just design on statistical probability. You can *never* eliminate downtime.
16:31 nohitall_ well I actually thought from the start that nothing would happen if a node fails, but I guess I was dead wrong :D
16:31 nohitall_ I assumed it just continues having access on node1 and if node2 rejoins at some point glusterfs will resync it
16:32 JoeJulian If you *stop* a server, nothing will happen.
16:33 nohitall_ well I just unplugged the cable to simulate it
16:33 nohitall_ because thats what would happen in reallife if hardware fails
16:33 nohitall_ just stops responding
16:37 nohitall_ JoeJulian: well thanks for the info
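[The freeze nohitall_ is describing is the network.ping-timeout volume option at work (42 seconds by default). It is tunable per volume; lowering it is a commonly cited tweak for VM-disk workloads, though, as the glusterbot factoid above explains, a low value makes spurious disconnects and the expensive fd/lock re-establishment more likely. A sketch, with "vol-name" and the value 10 as examples only:

    gluster volume set vol-name network.ping-timeout 10

Note also JoeJulian's point at 16:32: cleanly stopping a server closes its TCP connections, so clients fail over immediately instead of waiting out the ping-timeout; only an abrupt failure (like a pulled cable) triggers the full wait.]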
16:39 ben453 joined #gluster
16:41 masuberu joined #gluster
16:49 rafi joined #gluster
16:58 ahino joined #gluster
16:59 shyam joined #gluster
17:10 hagarth joined #gluster
17:12 aravindavk joined #gluster
17:17 elico joined #gluster
17:31 shubhendu joined #gluster
17:51 om joined #gluster
17:53 BitByteNybble110 joined #gluster
18:01 bwerthmann joined #gluster
18:01 shubhendu joined #gluster
18:05 julim joined #gluster
18:11 bb joined #gluster
18:13 bb_ joined #gluster
18:13 bb_ Hello
18:13 glusterbot bb_: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
18:20 bb_ I'm very new to gluster and am running into the following issue:  I currently have the following volume setup running over an infiniband network  Volume Name: data-storage Type: Stripe Volume ID: 5502118b-394a-4fbe-af51-aca9c64d9eef Status: Started Number of Bricks: 1 x 12 = 12 Transport-type: rdma Bricks: Brick1: 192.168.0.90:/data/brick1/data Brick2: 192.168.0.90:/data/brick2/data Brick3: 192.168.0.90:/data/brick3/data Brick4: 192.1
18:20 hagarth joined #gluster
18:21 bb_ Oi, that didn't work as planned - basically my issue is that I'm getting transport disconnects when dding two files from two shells on the same client: Shell1: [root@storage4 glusterfs]# dd if=/dev/zero of=sb-io-test-1 bs=1M count=10k conv=fdatasync dd: error writing ‘sb-io-test-1’: Transport endpoint is not connected dd: closing output file ‘sb-io-test-1’: Transport endpoint is not connected
18:21 bb_ I'm wondering if anyone is familiar with this issue and whether there is some sort of settings tweak that I'm missing that would correct it
18:22 bb_ If I run that same dd just by itself, not two at a time, it succeeds; it only fails when two are running at once
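[For anyone trying to reproduce the "two at a time" case bb_ describes, something like the following from a single client mount should do it; the mount point /mnt/data-storage is hypothetical:

    cd /mnt/data-storage
    dd if=/dev/zero of=sb-io-test-1 bs=1M count=10k conv=fdatasync &
    dd if=/dev/zero of=sb-io-test-2 bs=1M count=10k conv=fdatasync &
    wait
]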
18:25 Gnomethrower joined #gluster
18:31 chirino joined #gluster
18:32 Wojtek joined #gluster
18:39 robb_nl joined #gluster
18:45 deniszh joined #gluster
19:01 derjohn_mobi joined #gluster
19:03 hagarth joined #gluster
19:05 ghenry joined #gluster
19:05 ghenry joined #gluster
19:14 xavih joined #gluster
19:14 malevolent joined #gluster
19:24 om joined #gluster
19:26 faceman joined #gluster
19:34 amye left #gluster
19:34 armyriad joined #gluster
19:54 JoeJulian ~stripe | bb_
19:54 glusterbot bb_: Please see http://joejulian.name/blog/should-i-use-stripe-on-glusterfs/ about stripe volumes.
19:55 bb_ Thank you for the link. Ill give that a read
19:58 JoeJulian I suspect the problem you're seeing is inherent in the design of stripe. If it's not, however, I would look at storage schedulers, memory use, cpu utilization...
20:00 gluster-newb So when I got the 42 second ping timeout disconnect, it looks like it never reconnected. Is there a maximum number of retries before it gives up completely?
20:00 armyriad joined #gluster
20:00 JoeJulian Nope. The only time I've heard of it failing to reconnect is when the server is dead or not responding, or due to a known bug with ssl connections.
20:01 plarsen joined #gluster
20:06 bb_ I would have to say you're probably right.... While one dd at a time does crank through the memory, they fail almost immediately when two get started at once, before there's time to eat through the memory or spike the cpu.
20:10 bb_ Is there a recommended storage scheduler for use with glusterfs?
20:12 om joined #gluster
20:21 faceman Is there a way to set up a peer node which accesses and updates other peers but cannot be directly accessed (e.g. behind a firewall)?
20:22 faceman And pulls updates from the other peers?
20:24 squizzi joined #gluster
20:34 armyriad joined #gluster
20:51 deniszh joined #gluster
20:53 om joined #gluster
20:54 gluster-newb Looks like I'm getting some "Reply submission failed" errors in my brick logs.  Did anything come out of this conversation?  https://irclog.perlgeek.de/gluster-dev/2013-05-06
20:54 glusterbot Title: IRC log for #gluster-dev, 2013-05-06 (at irclog.perlgeek.de)
21:06 gluster-newb joined #gluster
21:18 julim joined #gluster
21:20 dlambrig_ joined #gluster
21:26 Gambit15 joined #gluster
21:29 hackman joined #gluster
21:31 Alghost joined #gluster
21:31 JoeJulian faceman: no. In fact, the clients connect directly to all the storage servers that are part of the volume so if your clients cannot reach the host:port of a brick, your volume will be broken.
21:35 wadeholler joined #gluster
21:40 elico joined #gluster
21:44 JoeJulian faceman: You can solve that with ipsec or vxlans.
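[Context for JoeJulian's answer: clients talk directly to glusterd on each server (TCP 24007) and to every brick process (ports allocated from 49152 upward since Gluster 3.4), so all of those must be reachable through any firewall or NAT in the path. A firewalld sketch; the brick port range shown is an assumption sized for up to 100 bricks per server:

    firewall-cmd --permanent --add-port=24007-24008/tcp   # glusterd management
    firewall-cmd --permanent --add-port=49152-49251/tcp   # brick ports
    firewall-cmd --reload
]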
21:45 om joined #gluster
22:23 om joined #gluster
22:26 plarsen joined #gluster
22:28 jocke- joined #gluster
22:43 shyam joined #gluster
23:14 plarsen joined #gluster
23:15 nathwill joined #gluster
23:21 d0nn1e joined #gluster
23:23 fcoelho joined #gluster
23:25 om joined #gluster
23:51 plarsen joined #gluster
23:55 shyam left #gluster
23:57 masuberu joined #gluster
