
IRC log for #gluster, 2017-01-05


All times shown according to UTC.

Time Nick Message
00:25 Klas joined #gluster
00:53 phileas joined #gluster
01:05 shdeng joined #gluster
01:26 plarsen joined #gluster
01:44 shyam joined #gluster
01:45 cholcombe joined #gluster
01:50 riyas joined #gluster
01:53 atinm_ joined #gluster
02:06 MadPsy joined #gluster
02:11 jdossey joined #gluster
02:13 MadPsy joined #gluster
02:16 bwerthmann joined #gluster
02:46 social joined #gluster
02:48 annettec joined #gluster
02:56 derjohn_mob joined #gluster
03:05 frakt joined #gluster
03:05 Chinorro_ joined #gluster
03:05 Marbug joined #gluster
03:05 chris4 joined #gluster
03:05 Nebraskka joined #gluster
03:05 pocketprotector joined #gluster
03:05 k0nsl joined #gluster
03:05 sloop joined #gluster
03:05 Iouns joined #gluster
03:05 DJClean joined #gluster
03:06 freepe joined #gluster
03:06 overclk joined #gluster
03:07 k0nsl joined #gluster
03:14 kramdoss_ joined #gluster
03:18 steveeJ joined #gluster
03:43 magrawal joined #gluster
03:46 nbalacha joined #gluster
03:49 atinm_ joined #gluster
04:08 phileas joined #gluster
04:09 nishanth joined #gluster
04:18 gyadav joined #gluster
04:19 itisravi joined #gluster
04:21 buvanesh_kumar joined #gluster
04:38 gyadav joined #gluster
04:39 gyadav joined #gluster
04:47 Shu6h3ndu joined #gluster
04:57 gem joined #gluster
04:57 jiffin joined #gluster
05:10 ppai joined #gluster
05:11 karthik_us joined #gluster
05:18 riyas joined #gluster
05:19 Prasad joined #gluster
05:21 ndarshan joined #gluster
05:24 prasanth joined #gluster
05:26 Saravanakmr joined #gluster
05:27 bwerthmann joined #gluster
05:32 jiffin1 joined #gluster
05:32 aravindavk joined #gluster
05:34 jiffin joined #gluster
05:35 jiffin1 joined #gluster
05:35 skoduri joined #gluster
05:46 RameshN joined #gluster
05:46 Karan joined #gluster
05:52 nbalacha joined #gluster
05:55 hgowtham joined #gluster
05:59 sbulage joined #gluster
06:05 jkroon joined #gluster
06:05 ankitraj joined #gluster
06:06 nbalacha joined #gluster
06:08 nbalacha joined #gluster
06:09 ju5t joined #gluster
06:11 ankitraj joined #gluster
06:14 susant joined #gluster
06:19 k4n0 joined #gluster
06:26 sanoj joined #gluster
06:34 karthik_us joined #gluster
06:40 shyam joined #gluster
06:42 nishanth joined #gluster
06:49 Wizek_ joined #gluster
06:50 XpineX joined #gluster
06:53 victori joined #gluster
06:55 nbalacha joined #gluster
07:00 asriram joined #gluster
07:13 mhulsman joined #gluster
07:17 ashiq joined #gluster
07:18 mhulsman joined #gluster
07:26 jtux joined #gluster
07:26 victori joined #gluster
07:36 pkalever joined #gluster
07:36 karthik_us joined #gluster
07:43 Lee1092 joined #gluster
07:44 mhulsman joined #gluster
07:51 rastar joined #gluster
08:05 victori joined #gluster
08:14 mhulsman1 joined #gluster
08:24 victori joined #gluster
08:26 fsimonce joined #gluster
08:27 mhulsman joined #gluster
08:28 pulli joined #gluster
08:28 d0nn1e joined #gluster
08:29 jtux joined #gluster
08:35 orogor hello
08:35 glusterbot orogor: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
08:35 orogor can I get slightly async writes or write caches in a replicated setup?
08:36 orogor I get 50MB/s writes in replicated, 500MB/s writes in distributed, with a 1GB link
08:37 orogor prod will get a 10GB link, but also faster writes
08:39 orogor also I tried distributed and I see everything 3 times (the test is 3 nodes distributed)
08:41 orogor abyss^, from what I've read, it's worth trying to get 3.9
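
(On the write-caching question above: replicated writes in GlusterFS are synchronous across replicas, but the client-side write-behind translator can buffer and batch them so the application sees lower latency. A minimal sketch, assuming a volume named volume01; check your release's defaults with gluster volume get before changing anything:

    # buffer writes on the client before shipping them to the bricks
    gluster volume set volume01 performance.write-behind on
    # let flush()/close() return before pending writes reach the bricks
    gluster volume set volume01 performance.flush-behind on
    # grow the per-file write-behind buffer from its ~1MB default
    gluster volume set volume01 performance.write-behind-window-size 4MB

This only softens latency as seen by the application; replicas are still written in lockstep, so the network link remains the throughput ceiling.)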
08:44 sanoj joined #gluster
08:48 kotreshhr joined #gluster
08:52 shyam joined #gluster
08:53 nishanth joined #gluster
09:05 XpineX joined #gluster
09:08 sanoj joined #gluster
09:15 Slashman joined #gluster
09:22 level7 joined #gluster
09:25 flying joined #gluster
09:31 alezzandro joined #gluster
09:32 kdhananjay joined #gluster
09:34 Wizek_ joined #gluster
09:51 derjohn_mob joined #gluster
09:58 karthik_us joined #gluster
10:14 poornima_ joined #gluster
10:21 alezzandro joined #gluster
10:21 Gnomethrower joined #gluster
10:30 TvL2386 joined #gluster
10:32 TvL2386 hey guys, I had issues with my glusterfs server a few days ago, which resulted in split-brain. I have solved most split-brain files by using: https://joejulian.name/blog/glusterfs-split-brain-recovery-made-easy/
10:32 glusterbot Title: GlusterFS Split-Brain Recovery Made Easy (at joejulian.name)
10:33 TvL2386 I had 3 directories which also showed up in the split-brain output (gluster volume heal volume01 info split-brain)
10:33 TvL2386 so I removed the directory from one of the bricks, hoping it would heal on the other, just like all the files did
10:33 TvL2386 however it's not showing up on the brick
10:34 TvL2386 the directory is gone now from the split-brain output
10:39 TvL2386 How would you resolve split-brain of a directory?
10:49 skoduri joined #gluster
10:49 orogor TvL2386, no idea :)
10:49 orogor in this situation, can you write in other directories than the one in split-brain?
10:49 orogor or is the entire volume unavailable?
10:51 orogor also, did you have some outage that led to this, or did the split-brain occur because different apps on different servers were writing to the same file and the network was unable to keep the sync up?
10:51 orogor ... I saw in the docs there's an auto-resolve option you can set
10:52 TvL2386 I have a directory now, which is called 27411 and when I want to `ls` it, I get Input/output error
10:52 TvL2386 the volume is available
10:52 TvL2386 gluster volume heal volume01 info lists that the contents are different on both bricks
10:52 TvL2386 one brick has 1 directory, the other has multiple other directories
10:53 TvL2386 I'm not sure what the cause was to this outage...
10:53 orogor gluster volume set $VOLUM cluster.favorite-child-policy mtime
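
(cluster.favorite-child-policy, suggested above, tells the self-heal daemon how to pick a winning copy automatically instead of leaving files in split-brain. A sketch assuming a volume named volume01; the documented values are none (the default), size, ctime, mtime, and majority:

    # prefer the most recently modified copy when replicas disagree
    gluster volume set volume01 cluster.favorite-child-policy mtime
    # confirm the setting took effect
    gluster volume get volume01 cluster.favorite-child-policy

Automatic resolution silently discards the losing replica's writes, so it trades safety for convenience.)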
10:53 TvL2386 xfs did print bothersome stuff (looking it up)
10:53 TvL2386 [1806132.119714] XFS: glusterfsd(1515) possible memory allocation deadlock size 65728 in kmem_alloc (mode:0x2408240)
10:54 TvL2386 lots of these entries were shown in `dmesg`
10:55 orogor TvL2386, https://www.centos.org/forums/viewtopic.php?t=52412
10:55 glusterbot Title: XFS: possible memory allocation deadlock in kmem_alloc (mode - CentOS (at www.centos.org)
10:56 orogor TvL2386, http://oss.sgi.com/archives/xfs/2016-05/msg00453.html (he shows the command to check xfs fragmentation)
10:56 glusterbot Title: XFS: possible memory allocation deadlock in kmem_alloc (at oss.sgi.com)
10:57 sanoj joined #gluster
11:00 TvL2386 xfs_db> frag       actual 503592, ideal 493773, fragmentation factor 1.95%
11:02 TvL2386 I did make the directory block size 64k (mkfs.xfs -n size=64k)
11:03 TvL2386 and I mount with inode64 option
11:05 TvL2386 as per this: https://www.gluster.org/pipermail/gluster-users/2013-July/013421.html (follow the link in there to xfs.org for explaining the 64k)
11:05 glusterbot Title: [Gluster-users] Recommended filesystem for GlusterFS bricks. (at www.gluster.org)
11:06 TvL2386 I checked the memory consumption when it happened and nothing weird was shown there. The servers are dedicated glusterfs and have 4GB RAM
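
(For reference, the brick setup TvL2386 describes would be created roughly as follows; the device and mount point are placeholders:

    # 64k directory block size, per the gluster-users recommendation linked above
    mkfs.xfs -n size=64k /dev/sdb1
    # inode64 lets XFS allocate inodes anywhere on a large filesystem
    mount -o inode64 /dev/sdb1 /glusterfs/disk01
    # the fragmentation report quoted earlier can also be produced non-interactively
    xfs_db -r -c frag /dev/sdb1
)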
11:09 susant1 joined #gluster
11:09 TvL2386 recovering from split brain really is a pain
11:13 AnkitRaj_ joined #gluster
11:18 TvL2386 so I did: gluster volume heal volume01 split-brain source-brick 10.0.0.10:/glusterfs/disk01/volume01
11:19 TvL2386 and now I get a lot of
11:19 TvL2386 Healing gfid:f4cb1df1-dcbd-4f52-a6fd-7e013f5b92b9 failed:Transport endpoint is not connected.
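
("Transport endpoint is not connected" during a heal usually means a brick process or the self-heal daemon is down or unreachable, rather than anything wrong with the command itself. A general checklist, not a guaranteed fix:

    # are all brick processes and the self-heal daemon online?
    gluster volume status volume01
    # are all peers in the Connected state?
    gluster peer status
    # once everything is back up, retry the heal
    gluster volume heal volume01
)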
11:25 skoduri joined #gluster
11:35 ankitraj joined #gluster
11:55 gluytium joined #gluster
11:55 ashka joined #gluster
11:55 sac joined #gluster
11:55 loadtheacc joined #gluster
11:55 Champi joined #gluster
11:55 john51 joined #gluster
11:56 nigelb joined #gluster
11:56 csaba joined #gluster
11:56 bfoster joined #gluster
11:56 nigelb joined #gluster
11:56 SlickNik joined #gluster
11:57 lkoranda joined #gluster
12:06 kotreshhr joined #gluster
12:07 owlbot joined #gluster
12:10 jri joined #gluster
12:20 asriram joined #gluster
12:44 karthik_us joined #gluster
12:46 phileas joined #gluster
13:00 bfoster joined #gluster
13:06 bfoster joined #gluster
13:07 flying joined #gluster
13:08 abyss^ orogor: can't right now, because on production we have 3.7 :)
13:09 ira joined #gluster
13:13 kramdoss_ joined #gluster
13:17 mhulsman1 joined #gluster
13:19 orogor abyss^, and you can't migrate from 3.7 to 3.9?
13:19 orogor I am still new to gluster... so reading a lot, didn't look at the migration procedures yet
13:22 abyss^ orogor: we can, but it's not the case for now :)
13:30 mhulsman joined #gluster
13:36 unclemarc joined #gluster
13:40 nbalacha joined #gluster
13:43 susant joined #gluster
13:45 phileas joined #gluster
13:45 saybeano joined #gluster
13:56 primehaxor joined #gluster
13:58 skylar joined #gluster
14:14 RameshN joined #gluster
14:20 ankitraj joined #gluster
14:23 Wizek_ joined #gluster
14:27 msvbhat joined #gluster
14:46 m0zes joined #gluster
14:50 msvbhat joined #gluster
14:57 atinmu joined #gluster
15:01 atinmu joined #gluster
15:05 vbellur joined #gluster
15:10 ankitraj joined #gluster
15:13 Intensity joined #gluster
15:18 squizzi joined #gluster
15:23 samppah joined #gluster
15:23 bwerthmann joined #gluster
15:23 rastar joined #gluster
15:26 rwheeler joined #gluster
15:29 kpease joined #gluster
15:41 TvL2386 joined #gluster
15:59 ivan_rossi joined #gluster
15:59 ivan_rossi left #gluster
16:00 msvbhat joined #gluster
16:05 kxseven joined #gluster
16:08 ankitraj joined #gluster
16:10 farhorizon joined #gluster
16:12 plarsen joined #gluster
16:16 ivan_rossi joined #gluster
16:17 ivan_rossi left #gluster
16:18 msvbhat joined #gluster
16:26 atinm_ joined #gluster
16:32 Marbug joined #gluster
16:34 farhoriz_ joined #gluster
16:35 alvinstarr joined #gluster
16:36 alezzandro joined #gluster
16:45 nishanth joined #gluster
16:46 mhulsman joined #gluster
16:50 skoduri joined #gluster
16:50 ofaq joined #gluster
16:54 unclemarc joined #gluster
16:59 jdossey joined #gluster
17:07 riyas joined #gluster
17:16 kpease joined #gluster
17:19 LiberalCarrot joined #gluster
17:19 LiberalCarrot left #gluster
17:19 vbellur joined #gluster
17:20 Caveat4U joined #gluster
17:20 Caveat4U Good morning all -
17:21 Caveat4U I've been trying to figure out - how does one go about changing the IP address of an active node on a running gluster cluster?
17:21 Caveat4U I was thinking just peer probe the new one
17:21 Caveat4U And then peer detach the old one
17:22 Caveat4U Would this cause any issues you think?
17:24 kkeithley distributed volume? replicated volume? maybe wait for rebalance or heal before you detach the old one.
17:24 Caveat4U Replicated
17:27 Caveat4U No die
17:27 Caveat4U s/die/dice/g
17:27 glusterbot Caveat4U: Error: u's/die/dice/g No die' is not a valid regular expression.
17:28 Caveat4U I run the peer probe
17:28 Caveat4U It says successful
17:28 Caveat4U But the new name doesn't show in the peer list
17:29 Caveat4U I think the issue is that it's the same machine
17:29 Caveat4U And it's running the same brick
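
(For the replicated case kkeithley describes, the usual sequence is to probe the new address, move the brick with replace-brick, and let self-heal finish before detaching the old peer. A sketch with placeholder names; it assumes the brick genuinely moves to a different machine, which is exactly what breaks down in Caveat4U's same-machine case:

    gluster peer probe new-host
    gluster volume replace-brick volume01 old-host:/glusterfs/brick1 new-host:/glusterfs/brick1 commit force
    # wait until heal info shows no pending entries
    gluster volume heal volume01 info
    gluster peer detach old-host

Probing a second name for a machine that is already a peer merely records an extra address against the existing UUID, which is why the new name never appears as a separate entry in the peer list.)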
17:42 LiberalSquash joined #gluster
17:42 LiberalSquash left #gluster
17:42 prasanth joined #gluster
17:44 ankitraj joined #gluster
17:49 ashiq joined #gluster
17:49 vbellur joined #gluster
17:51 daMaestro joined #gluster
17:54 bbooth joined #gluster
17:55 AnxiousGarlic joined #gluster
17:55 AnxiousGarlic left #gluster
17:56 sanoj joined #gluster
17:58 farhorizon joined #gluster
18:02 ashiq joined #gluster
18:11 skylar joined #gluster
18:22 squizzi joined #gluster
18:30 prasanth joined #gluster
18:34 Caveat4U @JoeJulian Do you know if there's a workaround for this issue? https://bugzilla.redhat.com/show_bug.cgi?id=1038866
18:34 glusterbot Bug 1038866: low, unspecified, ---, bugs, NEW , [FEAT] command to rename peer hostname
18:35 Caveat4U I've seen a couple of threads where people mention finding and replacing all instances of the hostname on their system
18:35 Caveat4U It still shows as the old hostname but runs great
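
(The workaround those threads describe edits glusterd's on-disk state directly. A rough, at-your-own-risk sketch, assuming the state lives in /var/lib/glusterd; run it on every node with glusterd stopped, and back the directory up first. old-host and new-host are placeholders:

    systemctl stop glusterd
    cp -a /var/lib/glusterd /var/lib/glusterd.bak
    # list every file that mentions the old name, then rewrite them in place
    grep -rl 'old-host' /var/lib/glusterd
    find /var/lib/glusterd -type f -exec sed -i 's/old-host/new-host/g' {} +
    systemctl start glusterd
)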
18:41 ahino joined #gluster
18:49 ahino joined #gluster
18:52 kotreshhr left #gluster
18:52 msvbhat joined #gluster
18:54 Karan joined #gluster
19:01 ahino joined #gluster
19:10 mhulsman joined #gluster
19:11 jdossey joined #gluster
19:21 jiffin joined #gluster
19:43 marlinc joined #gluster
19:49 derjohn_mob joined #gluster
19:50 mhulsman joined #gluster
19:54 gem joined #gluster
19:56 om2 joined #gluster
20:04 farhorizon joined #gluster
20:08 jdossey joined #gluster
20:11 vbellur joined #gluster
20:16 kpease joined #gluster
20:21 bbooth joined #gluster
20:25 primehaxor joined #gluster
20:26 kpease_ joined #gluster
20:29 farhorizon joined #gluster
20:31 shaunm joined #gluster
20:42 farhorizon joined #gluster
20:44 TvL2386 joined #gluster
20:58 unclemarc joined #gluster
21:00 rwheeler joined #gluster
21:01 farhorizon joined #gluster
21:09 ttkg joined #gluster
21:12 vbellur joined #gluster
21:13 buvanesh_kumar joined #gluster
21:28 d0nn1e joined #gluster
21:32 farhorizon joined #gluster
21:34 sage_ joined #gluster
21:57 farhorizon joined #gluster
22:07 pulli joined #gluster
22:14 jdossey joined #gluster
22:21 Caveat4U joined #gluster
22:22 bbooth joined #gluster
22:22 Caveat4U joined #gluster
22:38 arpu joined #gluster
23:01 farhoriz_ joined #gluster
23:39 Wizek_ joined #gluster
23:58 PaulCuzner joined #gluster
