
IRC log for #gluster, 2016-04-21


All times shown according to UTC.

Time Nick Message
00:07 luizcpg joined #gluster
00:37 julim_ joined #gluster
00:39 ItsMe` joined #gluster
00:39 the-me joined #gluster
00:40 armyriad joined #gluster
00:44 dlambrig_ joined #gluster
00:44 xMopxShell joined #gluster
00:46 tg2 joined #gluster
01:15 syadnom joined #gluster
01:20 johnmilton joined #gluster
01:26 vmallika joined #gluster
01:30 EinstCrazy joined #gluster
01:36 MugginsM joined #gluster
01:46 vmallika joined #gluster
01:52 russoisraeli joined #gluster
02:17 harish joined #gluster
02:19 jiffin joined #gluster
02:20 dlambrig_ joined #gluster
02:22 Lee1092 joined #gluster
02:30 hagarth joined #gluster
02:32 Javezim joined #gluster
02:33 Javezim Hey Gluster Peeps, Hoping I could grab some assistance. Anyone have any automation or scripts that they know of to deal with a large amount of Split-Brains in a Gluster Cluster? Going through all the ones we have manually would be too time consuming, so I am hoping there has been some sort of automation developed?
02:33 Javezim Ultimately, if it resolved them by keeping the biggest file on the cluster, that would be great.
02:34 MugginsM don't know of any. would have been handy a year ago when I was having that kinda problem :)
02:34 JoeJulian Javezim: What version are you using?
02:34 Javezim I see someone mentioned one here: https://www.gluster.org/pipermail/gluster-users/2016-March/025649.html
02:34 glusterbot Title: [Gluster-users] gluster volume heal info split brain command not showing files in split-brain (at www.gluster.org)
02:35 Javezim glusterfs 3.7.10 built on Apr  1 2016 14:20:43
02:35 JoeJulian So how about this script. It's really complicated...
02:36 JoeJulian I probably shouldn't paste it here, it's so long, but hey... people spam the channel at least once a week. It's my turn.
02:36 JoeJulian Ok, here it goes.... are you ready for it?
02:36 JoeJulian gluster volume heal $vol split-brain bigger-file
02:38 Javezim Doesn't this need to be run on every File on all Bricks though?
02:39 JoeJulian Meh, you got me. Looks like you're right. I thought it was supposed to work without specifying a file.
02:40 Javezim Yeah bit of a pain
02:40 Javezim Especially when that keeps failing
02:40 Javezim Because the files are on more than one replicated brick
02:40 Javezim That's another issue we are facing
02:40 Javezim a File can be on Brick 2 on Node1 and Node 2, and Brick 6 on Node 3 and Node 4
02:40 Javezim Not sure why
02:40 JoeJulian Odd.
02:41 MugginsM I've seen that when a rebalance has failed, but it was rare
02:41 JoeJulian Unless one of those is zero bytes mode 1000.
02:41 Javezim But when we then run that -  gluster volume heal $vol split-brain bigger-file, it returns: Input/output error
02:42 Javezim + The Gluster volume heal <VOL> info split-brain keeps returning directories instead of files, so we get 'bigger-file' not a valid option for directories.
02:45 Javezim So we're in a bit of a jam
02:45 Javezim was just hoping someone had been in this situation prior, and knew of some automation method to resolve
02:45 JoeJulian Meh, directories.
02:46 JoeJulian 1 moment while I script...
02:52 JoeJulian for d in $(find $brick_root -type d | xargs getfattr -m trusted.afr. | egrep -v '^#|^$' | sort -u); do find $brick_root -type d -exec setfattr -x $d {} \; done
02:52 JoeJulian That should remove the afr flags from every directory on the brick. Do that to each brick.
03:01 jobewan joined #gluster
03:06 Javezim So what will that do?
03:08 JoeJulian The way it determines that directories are in split-brain is by looking at trusted.afr attributes. If the same directory on two bricks shows pending updates for each other, it's considered split-brain. This is dumb for directories. They can't have data discrepancies and I've never seen an instance where the metadata doesn't match. Just clear the flag and be done with it.
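For context, a minimal sketch of what those attributes look like on a brick (the brick path, volume name and values here are assumed for illustration, not taken from this log):

    getfattr -d -m trusted.afr -e hex /bricks/brick1/some/dir
    # trusted.afr.myvol-client-0=0x000000000000000000000000
    # trusted.afr.myvol-client-1=0x000000000000000000000001
    # each value packs three 32-bit counters (data, metadata, entry changelogs);
    # split-brain is reported when two bricks each hold non-zero counters blaming the other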
03:11 ramteid joined #gluster
03:13 MugginsM so is running gluster 3.7.11 client against a pair of 3.6.8 servers likely to cause trouble?
03:13 MugginsM servers are Ubuntu Precise and not due for an upgrade for 6 months
03:19 Javezim Thanks @JoeJulian, Am testing it now :)
03:19 Javezim But still no idea on automation RE: Split Brains? Ie. Process all in a brick to always go with the biggest file, instead of one-by-one
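One possible way to batch this (a rough, untested sketch; volume name assumed, and directories will still be rejected as discussed above) is to feed every path reported by heal info split-brain into the bigger-file policy:

    vol=gv0
    gluster volume heal $vol info split-brain | grep '^/' | sort -u | while read -r f; do
        gluster volume heal $vol split-brain bigger-file "$f"
    done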
03:29 PaulCuzner joined #gluster
03:33 nbalacha joined #gluster
03:37 Javezim @JoeJulian When I run it the machine goes onto a new line with
03:37 Javezim >
03:37 Javezim Like it's expecting more input
03:45 overclk joined #gluster
03:50 kdhananjay joined #gluster
03:54 spalai joined #gluster
03:56 hchiramm joined #gluster
03:57 spalai left #gluster
03:59 MugginsM probably a missing '  or )
04:00 Javezim Can't seem to find where though
04:19 nehar joined #gluster
04:24 rafi joined #gluster
04:28 PaulCuzner joined #gluster
04:28 shubhendu joined #gluster
04:29 sloop joined #gluster
04:32 MugginsM joined #gluster
04:42 nishanth joined #gluster
04:42 skoduri joined #gluster
04:50 PaulCuzner joined #gluster
04:51 RameshN joined #gluster
04:53 Javezim @JoeJulian Yeah, really can't figure out what's missing from that script and where, any idea?
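For the record, the shell's > continuation prompt appears because the one-liner is missing a ; between the -exec ... \; terminator and the loop's done, so bash keeps waiting for the done keyword. A corrected form (same assumptions as the original, with $brick_root set to the brick path) would be roughly:

    for d in $(find $brick_root -type d | xargs getfattr -m trusted.afr. | egrep -v '^#|^$' | sort -u); do
        find $brick_root -type d -exec setfattr -x "$d" {} \;
    done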
05:00 gem joined #gluster
05:02 ndarshan joined #gluster
05:05 luizcpg joined #gluster
05:08 luizcpg joined #gluster
05:09 poornimag joined #gluster
05:11 luizcpg joined #gluster
05:13 ashiq joined #gluster
05:14 luizcpg joined #gluster
05:17 luizcpg joined #gluster
05:19 hgowtham joined #gluster
05:25 Apeksha joined #gluster
05:27 rafi joined #gluster
05:29 aravindavk joined #gluster
05:29 vmallika joined #gluster
05:30 luizcpg joined #gluster
05:33 aspandey joined #gluster
05:34 karthik___ joined #gluster
05:42 mhulsman joined #gluster
05:51 Gnomethrower joined #gluster
05:53 kshlm joined #gluster
05:56 rouven joined #gluster
06:16 ppai joined #gluster
06:28 Bhaskarakiran joined #gluster
06:29 karthik___ joined #gluster
06:32 spalai joined #gluster
06:33 jtux joined #gluster
06:33 kdhananjay joined #gluster
06:35 anil joined #gluster
06:37 Gnomethrower joined #gluster
06:38 rastar joined #gluster
06:42 harish joined #gluster
06:42 fsimonce joined #gluster
06:42 arcolife joined #gluster
06:43 armyriad joined #gluster
06:47 hchiramm joined #gluster
06:52 kovshenin joined #gluster
06:59 rouven joined #gluster
07:00 Wizek joined #gluster
07:00 jwd joined #gluster
07:12 deniszh joined #gluster
07:14 wnlx_ joined #gluster
07:14 jri joined #gluster
07:14 pur_ joined #gluster
07:19 [Enrico] joined #gluster
07:21 alghost joined #gluster
07:28 [diablo] joined #gluster
07:29 Manikandan joined #gluster
07:32 Wizek joined #gluster
07:38 ahino joined #gluster
07:40 [Enrico] joined #gluster
07:41 ivan_rossi joined #gluster
07:53 spalai joined #gluster
08:18 DV__ joined #gluster
08:31 skoduri joined #gluster
08:34 spalai joined #gluster
08:42 RameshN joined #gluster
08:44 spalai joined #gluster
08:59 dieter joined #gluster
09:05 EinstCrazy joined #gluster
09:07 yosafbridge joined #gluster
09:11 mhulsman joined #gluster
09:14 Manikandan_ joined #gluster
09:15 dieter Hi all
09:16 hackman joined #gluster
09:20 dieter I'll be short and clear. Is there a solution for the following issue we've experienced:
09:21 dieter untarring (so file-by-file actions) an archive of 80M (with 12,000 "small" files in it) to a glusterfs client volume (with replicated volumes)
09:22 dieter takes a - long - time. (4 to 5 minutes)
09:22 dieter Is there a solution / are there tweaking options which would help us to tackle this?
09:22 RameshN joined #gluster
09:23 ctria joined #gluster
09:24 dieter == single thread
09:40 Manikandan_ joined #gluster
09:40 Saravanakmr joined #gluster
09:40 paul98 joined #gluster
09:42 paul98 hi wonder if someone can help, just installed glusterfs on two centos servers, set up bricks / volumes etc and followed the getting started page, but it's not syncing between the two, looked at all the error logs and nothing is being produced, i'm on version 3.7.11, when i run gluster volume info i get that it has a volume id, it's started, the bricks with ip / path, says performance.readdir-ahead: on
09:42 paul98 but it's not syncing between the two
09:44 Norky_ joined #gluster
09:53 johnmilton joined #gluster
09:58 Slashman joined #gluster
10:22 armyriad joined #gluster
10:34 shubhendu joined #gluster
10:34 kshlm joined #gluster
10:37 marbu joined #gluster
10:46 paul98 anyone?
10:51 samppah hey paul98
10:52 paul98 hey samppah
10:52 samppah just to make sure, have you mounted glusterfs volume?
10:52 paul98 let me explain, so i had a partition mounted originally on the server which i've used the whole space on, i then mounted glusterfs
10:53 paul98 what i've noticed though when i run gluster volume status gv0 it shows as nfs server as N on one of the hosts as local
10:53 paul98 i assume this is an issue?
10:54 samppah can you send output of gluster vol info and gluster vol status to pastie.org?
10:55 paul98 http://pastie.org/10806654
10:55 glusterbot Title: #10806654 - Pastie (at pastie.org)
10:59 paul98 samppah: ^^
10:59 nbalacha joined #gluster
11:00 samppah paul98: thanks
11:01 samppah paul98: that looks correct.. have you mounted gluster volume from client with mount -t glusterfs server:/gv /mnt/point?
11:01 paul98 well
11:01 paul98 between the two
11:02 paul98 all i done was put a file in the /data dir
11:02 paul98 and would assume it would then just sync between the two servers?
11:02 mattmcc_ joined #gluster
11:03 samppah paul98: you have to mount it to some other point and access through that
11:03 morse joined #gluster
11:03 samppah paul98: of course you can mount it on both servers too
11:03 paul98 so i can't just drop a file in the /data
11:04 samppah nope
11:04 paul98 then gluster does its bit in the background
11:04 paul98 thats a bit :(
11:04 samppah all (at least most of the) magic happens on client side
11:04 pdrakewe_ joined #gluster
11:04 paul98 i was hoping it done it on server side lol!
11:04 samppah well you can always mount it with mount -t glusterfs server:/gv /mnt/data
11:05 samppah they are working on new style replication which happens pretty much on server side.. but it's not quite there yet afaik
11:06 frakt_ joined #gluster
11:06 pur__ joined #gluster
11:06 paul98 ok
11:07 [1]akay joined #gluster
11:07 paul98 so from my client i would do mount -t glusterfs ip:/gv0 /glusterfs (just done mkdir /glusterfs on local machine)
11:07 jlp1 joined #gluster
11:09 rouven joined #gluster
11:09 muneerse joined #gluster
11:10 harish joined #gluster
11:12 harish joined #gluster
11:13 samppah paul98: yeah, that's correct =)
11:14 paul98 paul@paul-ThinkPad-T430:~$ sudo mount -t glusterfs 192.168.101.47:/gv0 /gluster/
11:14 paul98 mount: unknown filesystem type 'glusterfs'
11:14 gem joined #gluster
11:15 nehar joined #gluster
11:17 paul98 is there a glusterfs client then?
11:18 samppah paul98: it should be glusterfs-fuse package if you are using rhel based distribution
11:18 paul98 urm ubuntu on this client
11:19 samppah hmm
11:19 dlambrig_ joined #gluster
11:20 xMopxShell joined #gluster
11:20 paul98 it's glusterfs-client ;)
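Putting the client-side steps together (a sketch; package names vary by distro, and the server IP and mount point are the ones pasted above):

    # Ubuntu/Debian client (on RHEL/CentOS the package is glusterfs-fuse)
    sudo apt-get install glusterfs-client
    sudo mkdir -p /gluster
    sudo mount -t glusterfs 192.168.101.47:/gv0 /gluster
    # optional fstab entry so it mounts at boot:
    # 192.168.101.47:/gv0  /gluster  glusterfs  defaults,_netdev  0 0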
11:20 Debloper joined #gluster
11:21 arcolife joined #gluster
11:23 paul98 samppah: does it work over nfs
11:23 paul98 i need to get iSCSI / windows working next on it!
11:24 samppah paul98: it should work over nfs too.. however i'd recommend looking at nfs ganesha instead of using built in nfs server http://blog.gluster.org/2014/09/glusterfs-and-nfs-ganesha-integration/
11:24 glusterbot Title: GlusterFS and NFS-Ganesha integration | Gluster Community Website (at blog.gluster.org)
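For the built-in gluster NFS server mentioned here, a quick sketch of a client mount (gluster's built-in NFS speaks NFSv3 only; IP and mount point assumed):

    mount -t nfs -o vers=3,mountproto=tcp 192.168.101.47:/gv0 /mnt/nfs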
11:25 hgichon joined #gluster
11:25 paul98 yes it works :d
11:25 paul98 samppah: thanks for your help! have a good day!
11:25 samppah paul98: no problem! have a good day too! :)
11:25 paul98 infact one more thing
11:25 paul98 so on a mount
11:26 johnmilton joined #gluster
11:26 paul98 from my client when i do ls i see files etc, that is just listing what is on the mount?
11:26 paul98 then if i was to open said file it would then download it
11:26 paul98 e.g i've dropped a 1gb file onto the glusterfs which i can see on both servers
11:26 paul98 then on client i see it listed
11:29 amye joined #gluster
11:29 samppah paul98: the client accesses files through the mountpoint from the servers
11:29 paul98 yup makes sense, just getting head round it / writing documents
11:29 samppah so if you do cat filename for example it loads whole file from server
11:30 samppah good :)
11:30 paul98 makes sense
11:30 paul98 i'm impressed, be looking forward to it being done server side though!
11:44 wnlx joined #gluster
11:45 coredump|br joined #gluster
11:49 mowntan joined #gluster
11:49 mowntan joined #gluster
11:49 mowntan joined #gluster
11:50 mowntan joined #gluster
11:50 mowntan joined #gluster
11:50 nottc joined #gluster
11:55 nehar joined #gluster
11:56 paul98 samppah: does it matter that it shows nfs as being N on one host
11:56 paul98 but other one is working?
12:01 chirino joined #gluster
12:04 marbu joined #gluster
12:10 vmallika joined #gluster
12:12 arcolife joined #gluster
12:14 Manikandan_ joined #gluster
12:17 dieter exit
12:17 dieter :-/
12:18 dieter bye :-)
12:27 mjrosenb joined #gluster
12:30 amye joined #gluster
12:33 mjrosenb joined #gluster
12:36 social joined #gluster
12:37 Apeksha joined #gluster
12:41 marbu joined #gluster
12:58 plarsen joined #gluster
13:02 unclemarc joined #gluster
13:07 mpietersen joined #gluster
13:19 ahino1 joined #gluster
13:23 EinstCrazy joined #gluster
13:24 luizcpg joined #gluster
13:56 russoisraeli joined #gluster
13:56 shubhendu joined #gluster
13:57 ahino joined #gluster
14:05 TvL2386 joined #gluster
14:09 harish joined #gluster
14:10 dlambrig_ joined #gluster
14:16 jlp1 joined #gluster
14:26 TvL2386 joined #gluster
14:28 paul98 samppah:  you there/ one more question
14:33 jwd joined #gluster
14:34 Caveat4U joined #gluster
14:39 skylar joined #gluster
14:43 shubhendu joined #gluster
14:47 Caveat4U @JoeJulian
14:48 Caveat4U Good morning
14:48 Caveat4U Here is our basic info: http://paste.fedoraproject.org/358163/25003014/
14:48 glusterbot Title: #358163 Fedora Project Pastebin (at paste.fedoraproject.org)
14:48 Debloper joined #gluster
14:49 Caveat4U We have an unsynced entry
14:49 Caveat4U When trying to heal, I get “Launching heal operation to perform index self heal on volume nmd has been unsuccessful on bricks that are down. Please check if all brick processes are running."
14:58 Wizek joined #gluster
15:00 coredump|br joined #gluster
15:06 Caveat4U I solved it myself - I ended up just writing some dummy data to the file that was having issues and gluster fixed itself
15:07 paul98 hmm, what happens if you shut down a server which is hosting a brick, should the other server carry on with no issues?
15:07 Caveat4U paul98: Are you using a replicated style?
15:07 paul98 yes
15:08 netzapper joined #gluster
15:08 kpease joined #gluster
15:08 netzapper after moving the gluster volume to completely separate servers, we still get the frozen processes stuck in `unlock_page`. I'm beginning to suspect a defect in the FUSE module.
15:09 Caveat4U What I’ve found is that as long as the brick’s replicated partner is online, you can safely just power cycle one brick at a time
15:10 amye joined #gluster
15:10 paul98 hmm cause i'm mapped from client to server1, but then i rebooted server2 and when i did an ls on the client mount i couldn't list the contents
15:11 Caveat4U I would wait for the experienced to share - that has just been my experience on my own gluster cluster
15:11 paul98 Caveat4U: no worries, thanks for the input, I would assume it would carry on working
15:12 * Caveat4U shrugs
15:12 paul98 but then how would my client know to look at other server as it's mapped to server2 ip
15:12 paul98 cause i know the boss is going to go well, this isn't very ha is it. lol
15:28 Wizek joined #gluster
15:29 dlambrig_ joined #gluster
15:32 wushudoin joined #gluster
15:47 armyriad joined #gluster
15:48 julim joined #gluster
15:55 paul98 welcome CyrilPeponnet
15:55 wnlx joined #gluster
15:58 Caveat4U joined #gluster
16:00 aspandey joined #gluster
16:02 squizzi_ joined #gluster
16:16 Caveat4U joined #gluster
16:19 hagarth joined #gluster
16:24 samppah paul98: are you using native glusterfs or nfs to mount volume?
16:26 paul98 samppah: i'm using native glusterfs
16:27 samppah paul98: it should then continue with the other server but there is a delay of 42 seconds by default.. it waits for that time to see if the server is coming back before it drops the connection to it
16:28 paul98 ah ok
16:28 paul98 i'll try it tomorrow and disable the network and see how it responds
16:28 paul98 although i would have thought the reboot would take longer than 42 seconds or so.
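Two related knobs worth noting here (a sketch; hostnames and volume name assumed): the client mount can be given fallback servers for fetching the volume layout at mount time, and the 42-second failover delay is the volume's ping timeout:

    mount -t glusterfs -o backup-volfile-servers=server2 server1:/gv0 /gluster
    gluster volume set gv0 network.ping-timeout 42   # default is 42 seconds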
16:31 anil joined #gluster
16:34 shubhendu joined #gluster
16:45 russoisraeli joined #gluster
16:51 Caveat4U joined #gluster
16:55 rafi joined #gluster
16:58 Ryan___ joined #gluster
16:59 chirino_m joined #gluster
17:00 rafi joined #gluster
17:00 rjoseph joined #gluster
17:01 rastar joined #gluster
17:02 unforgiven512 joined #gluster
17:02 alghost_ joined #gluster
17:02 netzapper_ joined #gluster
17:02 suliba_ joined #gluster
17:04 skylar joined #gluster
17:04 JonathanD joined #gluster
17:06 unclemarc joined #gluster
17:08 ramky joined #gluster
17:08 devilspgd_ joined #gluster
17:09 Champi joined #gluster
17:09 chirino joined #gluster
17:10 coredump|br joined #gluster
17:10 mattmcc joined #gluster
17:11 PotatoGim joined #gluster
17:12 purpleidea joined #gluster
17:13 d-fence_ joined #gluster
17:15 paul98_ joined #gluster
17:15 ivan_rossi left #gluster
17:16 lezo_ joined #gluster
17:17 shortdudey123_ joined #gluster
17:19 mmckeen joined #gluster
17:19 pdrakeweb joined #gluster
17:22 johnmilton joined #gluster
17:27 karthikus joined #gluster
17:31 kpease joined #gluster
17:45 ahino joined #gluster
17:50 hagarth joined #gluster
17:55 dlambrig_ joined #gluster
17:57 hackman joined #gluster
17:57 bennyturns joined #gluster
18:17 syadnom hi guys....weird question maybe, google doesn't understand me...
18:17 syadnom does it make any sense to set up separate drives as separate bricks?
18:18 Caveat4U joined #gluster
18:18 syadnom ie, I have 4 enclosures with atom cpu's and 4 SATA bays each.  Better to put the bays in a raid and add that as a brick, or configure 16 bricks?  I just don't know if I can have gluster 'group' bricks so replicas are on different servers...
18:19 syadnom I want to get away from having to raid the drives, adds a layer of complexity for failures.  I don't want to be rebuilding raid if a drive goes down...  rather 'jbod' in software somehow (aufs?? multiple bricks?? idk)
18:20 jobewan joined #gluster
18:27 bennyturns joined #gluster
18:44 hagarth left #gluster
18:44 nathwill joined #gluster
18:49 brandon_ joined #gluster
18:58 amye joined #gluster
19:14 squizzi_ joined #gluster
19:19 dlambrig_ joined #gluster
19:20 bennyturns joined #gluster
19:28 bennyturns joined #gluster
19:48 JoeJulian syadnom: Sure it makes sense. The only other thing you may wish to consider is throughput. As jbod your disk i/o speeds will be your bottleneck. Once you get past a certain number of disks, it might be advantageous to raid0 two to four disks to make a brick, returning the bottleneck to the network.
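On the 'grouping' part of the question: with replica N, gluster forms replica sets from consecutive bricks in the order given to volume create, so listing bricks in alternating-server order keeps each replica pair on different servers. A sketch with assumed hostnames and paths:

    gluster volume create myvol replica 2 \
        server1:/bricks/disk1/brick server2:/bricks/disk1/brick \
        server3:/bricks/disk1/brick server4:/bricks/disk1/brick \
        server1:/bricks/disk2/brick server2:/bricks/disk2/brick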
19:53 jsellens joined #gluster
19:59 squizzi_ joined #gluster
19:59 jsellens hello - having a weird very high load average problem - small, 2 nodes, gluster 3.6, fine for months, yesterday load average went to 40+ and things weren't very usable.  Had to shut down one node.  I might suspect the hardware, but it appears ok.  Anyone seen anything similar?
20:01 skylar joined #gluster
20:01 JoeJulian I've seen lots of things similar. Check logs in /var/log/glusterfs/{,bricks}/*, dmesg, etc. Check gluster volume heal $volname info
20:02 JoeJulian If it was due to a healing event, shutting down one replica will only exacerbate the problem.
20:03 JoeJulian You might consider turning off client-side self-heals. volume set $vol cluster.data-self-heal off
20:11 jsellens thanks for the suggestions - it was healed up fine late last night, and then went nuts at 3:30am (of course).  I see some connection errors and a few other things but I suspect they may be load related.
20:11 jsellens I'll try turning off self heal and looking closer at the logs in the off hours tonight - thanks!
20:12 DV_ joined #gluster
20:12 JoeJulian If you find something, feel free to ask here about it.
20:12 jsellens And yes - I didn't want to shut one down, but with nothing working, people were getting cranky. :-)
20:12 JoeJulian Been there, done that.
20:15 jobewan joined #gluster
20:33 nathwill joined #gluster
20:59 bluenemo joined #gluster
21:03 wushudoin joined #gluster
21:14 jiffin joined #gluster
21:16 robb_nl joined #gluster
21:20 robb_nl joined #gluster
21:23 amye joined #gluster
21:43 Caveat4U joined #gluster
21:44 wushudoin joined #gluster
22:07 DV_ joined #gluster
22:09 dlambrig_ joined #gluster
22:30 MugginsM joined #gluster
22:34 Wizek joined #gluster
22:39 MugginsM joined #gluster
23:06 Wizek joined #gluster
23:12 plarsen joined #gluster
23:55 Caveat4U joined #gluster
23:58 RameshN joined #gluster
