
IRC log for #gluster, 2015-10-07


All times shown according to UTC.

Time Nick Message
00:18 scubacuda joined #gluster
00:32 zhangjn joined #gluster
01:08 RedW joined #gluster
01:20 vimal joined #gluster
01:22 harish_ joined #gluster
01:51 JoeJulian left #gluster
01:51 JoeJulian joined #gluster
01:53 Lee1092 joined #gluster
01:54 harish joined #gluster
01:58 nangthang joined #gluster
02:00 zhangjn joined #gluster
02:04 aaronott joined #gluster
02:07 zhangjn joined #gluster
02:07 Pupeno joined #gluster
02:17 haomaiwa_ joined #gluster
02:25 bdiehr @ppa
02:25 glusterbot bdiehr: The official glusterfs packages for Ubuntu are available here: 3.5: http://goo.gl/6HBwKh 3.6: http://goo.gl/XyYImN 3.7: https://goo.gl/aAJEN5 -- See more PPAs for QEMU with GlusterFS support, and GlusterFS QA releases at https://launchpad.net/~gluster -- contact semiosis with feedback
02:30 haomaiwa_ joined #gluster
02:39 haomaiwang joined #gluster
02:57 badone_ joined #gluster
03:03 haomaiwa_ joined #gluster
03:10 dlambrig_ joined #gluster
03:19 nangthang joined #gluster
03:23 atinm joined #gluster
03:37 [7] joined #gluster
03:38 stickyboy joined #gluster
03:42 ramteid joined #gluster
03:48 dusmant joined #gluster
03:51 bdiehr @ppa
03:51 glusterbot bdiehr: The official glusterfs packages for Ubuntu are available here: 3.5: http://goo.gl/6HBwKh 3.6: http://goo.gl/XyYImN 3.7: https://goo.gl/aAJEN5 -- See more PPAs for QEMU with GlusterFS support, and GlusterFS QA releases at https://launchpad.net/~gluster -- contact semiosis with feedback
03:59 nbalacha joined #gluster
04:02 neha_ joined #gluster
04:04 bdiehr If I plan on using EBS for the file storage for Gluster, would I then attach the EBS to all peers or just the master?
04:04 bdiehr (aws EBS storage)
04:09 badone joined #gluster
04:10 shubhendu joined #gluster
04:26 kanagaraj joined #gluster
04:27 kotreshhr joined #gluster
04:28 chirino joined #gluster
04:31 gem joined #gluster
04:50 Saravana_ joined #gluster
04:50 atalur joined #gluster
04:57 maveric_amitc_ joined #gluster
04:58 ndarshan joined #gluster
04:59 RameshN joined #gluster
05:00 ramky joined #gluster
05:02 pppp joined #gluster
05:03 aravindavk joined #gluster
05:09 hgowtham joined #gluster
05:10 rafi joined #gluster
05:25 Bhaskarakiran joined #gluster
05:27 Apeksha joined #gluster
05:28 auzty joined #gluster
05:31 ppai joined #gluster
05:31 shaunm joined #gluster
05:31 hagarth joined #gluster
05:33 haomaiwa_ joined #gluster
05:35 Bhaskarakiran joined #gluster
05:36 zhangjn joined #gluster
05:40 arcolife joined #gluster
05:41 arcolife joined #gluster
05:41 Manikandan joined #gluster
05:43 kdhananjay joined #gluster
05:45 hchiramm joined #gluster
05:48 hagarth joined #gluster
05:50 ashiq joined #gluster
05:50 nishanth joined #gluster
05:56 karnan joined #gluster
05:59 ashka joined #gluster
06:01 haomaiwang joined #gluster
06:04 jtux joined #gluster
06:09 Pupeno joined #gluster
06:09 dusmant joined #gluster
06:17 dlambrig_ joined #gluster
06:21 mhulsman joined #gluster
06:30 nangthang joined #gluster
06:33 anil joined #gluster
06:34 anil joined #gluster
06:35 LebedevRI joined #gluster
06:37 mbukatov joined #gluster
06:40 skoduri joined #gluster
06:46 sadbox joined #gluster
06:51 vmallika joined #gluster
06:57 ramky joined #gluster
06:57 nbalacha joined #gluster
06:57 yazhini joined #gluster
07:01 haomaiwa_ joined #gluster
07:15 Raide joined #gluster
07:20 [Enrico] joined #gluster
07:21 kayn joined #gluster
07:24 Raide joined #gluster
07:30 nbalacha joined #gluster
07:31 Raide joined #gluster
07:38 Raide joined #gluster
07:44 fsimonce joined #gluster
07:55 mufa joined #gluster
07:56 mufa left #gluster
07:59 Raide joined #gluster
08:00 mufa joined #gluster
08:01 haomaiwa_ joined #gluster
08:01 ramky joined #gluster
08:05 kovshenin joined #gluster
08:19 fsimonce joined #gluster
08:21 [Enrico] joined #gluster
08:21 kayn Hey guys, could someone explain to me what I can expect during the full heal process? I see that some files disappeared and then came back again.
08:22 Trefex joined #gluster
08:27 arcolife joined #gluster
08:30 kayn I also would like to know what will happen to gluster when some files are removed during the heal process, or if gluster runs out of space during the heal. I tried to remove some files to save some space but these files weren't removed from the gluster nodes....
08:31 nbalacha joined #gluster
08:37 Raide joined #gluster
08:41 Raide joined #gluster
08:50 dusmant joined #gluster
08:50 nishanth joined #gluster
09:01 7GHABCXV0 joined #gluster
09:04 Slashman joined #gluster
09:08 jwd joined #gluster
09:09 Raide joined #gluster
09:12 dusmant joined #gluster
09:17 Raide joined #gluster
09:23 rafi joined #gluster
09:24 cmorandin joined #gluster
09:39 stickyboy joined #gluster
09:41 GB21 joined #gluster
09:41 GB21_ joined #gluster
09:43 hos7ein joined #gluster
09:44 Raide joined #gluster
10:01 haomaiwang joined #gluster
10:02 kdhananjay joined #gluster
10:03 kdhananjay joined #gluster
10:11 Saravana_ joined #gluster
10:14 ccoffey I have an 8-peer replicate-distribute setup. The storage bricks needed to be rebuilt on one host. The peer was not rebuilt. How do I bring this back into the volume? Gluster 3.6.4.
10:15 rafi ccoffey: you want to bring back one brick ?
10:16 ccoffey In reality, 5 bricks, but will do 1 at a time
10:17 ccoffey we had some issues before where self heal exhausted memory and cpu
10:18 rafi ccoffey: why do you want to bring one by one ?
10:19 rafi ccoffey: gluster v start volname force will try to start *all* the bricks which are not running
10:19 ccoffey I'm making assumptions it would be less taxing on IO. I can bring all back, but would prefer the process to take a longer time and be lighter on IO
10:20 rafi ccoffey: I don't think starting all the brick will impact the i/o
10:22 mdavidson joined #gluster
10:27 ccoffey is there a procedure to reintroduce a peer safely? the bricks have been wiped, but not the OS
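A rough sketch of the path rafi points at above, using hypothetical names (volume "myvol", rebuilt brick host5:/bricks/b1); a wiped brick directory may also need its volume-id xattr restored before it will start, as tsaavik notes later in this log:

    # on the rebuilt host: recreate the (empty) brick directory
    mkdir -p /bricks/b1
    # if the brick refuses to start, copy the volume-id xattr from a surviving brick:
    #   getfattr -n trusted.glusterfs.volume-id -e hex /bricks/b1   (on a healthy peer)
    #   setfattr -n trusted.glusterfs.volume-id -v 0x<value> /bricks/b1
    # respawn any brick processes that are not running (rafi's suggestion)
    gluster volume start myvol force
    # queue a full self-heal so the empty brick gets repopulated, and watch progress
    gluster volume heal myvol full
    gluster volume heal myvol info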
10:34 ws2k3 joined #gluster
10:46 hagarth joined #gluster
10:48 ru57y joined #gluster
10:50 Ru57y joined #gluster
10:52 zhangjn joined #gluster
10:56 overclk joined #gluster
10:56 RayTrace_ joined #gluster
11:01 haomaiwa_ joined #gluster
11:11 Raide joined #gluster
11:15 DRoBeR joined #gluster
11:19 harish_ joined #gluster
11:30 RayTrace_ joined #gluster
11:31 HemanthaSKota joined #gluster
11:35 firemanxbr joined #gluster
11:41 anil joined #gluster
11:43 ccoffey does anyone know what the default value for background-self-heal-count is?
11:43 ccoffey oh, maybe 16
11:52 Saravana_ joined #gluster
11:53 rjoseph joined #gluster
12:02 jdarcy joined #gluster
12:03 Raide joined #gluster
12:04 deniszh joined #gluster
12:09 morse joined #gluster
12:09 ParsectiX joined #gluster
12:10 ParsectiX Hello guys. Can I setup gluster peering and create volumes behind floating ips?
12:11 dpetrov joined #gluster
12:11 dpetrov hi guys
12:11 dpetrov I need some assistance with glusterfs implementation
12:12 dpetrov it's pretty straight-forward implementation between two servers
12:12 dpetrov where they both share same folder.
12:12 dpetrov two bricks in replicate mode volume
12:12 dpetrov implemented on Debian 8.1
12:13 dpetrov my problem is that one of the servers got reloaded
12:13 dpetrov and there are two issues now
12:14 dpetrov 1. on the client machine I am unable to perform any gluster commands (e.g. gluster volume info or gluster peer status)
12:15 dpetrov 2. sometimes when copying some files to one of the machines, the other one sees the files within the dir, but they're zero-byte sized
12:15 Saravana_ joined #gluster
12:15 Raide joined #gluster
12:15 dpetrov so I have to manually touch them to invoke the gluster healing on those
12:15 dpetrov if anyone is keen on helping me - please, give me a shout and I'll provide all the other details
12:16 dpetrov this is basically the symptoms of issue #1:
12:16 dpetrov # gluster volume info
12:16 dpetrov Connection failed. Please check if gluster daemon is operational.
12:17 Raide joined #gluster
12:17 dpetrov and that's a snipped from the logs:
12:17 dpetrov
12:17 dpetrov [2015-10-07 12:17:07.335809] I [socket.c:3077:socket_submit_request] 0-glusterfs: not connected (priv->connected = 0)
12:17 dpetrov [2015-10-07 12:17:07.335852] W [rpc-clnt.c:1542:rpc_clnt_submit] 0-glusterfs: failed to submit rpc-request (XID: 0x1 Program: Gluster CLI, ProgVers: 2, Proc: 5) to rpc-transport (glusterfs)
12:19 ppai joined #gluster
12:19 jtux joined #gluster
12:19 neha_ joined #gluster
12:19 kovsheni_ joined #gluster
12:20 dlambrig left #gluster
12:21 Raide joined #gluster
12:22 unclemarc joined #gluster
12:26 HemanthaSKota joined #gluster
12:27 Raide joined #gluster
12:28 zhangjn joined #gluster
12:29 Jules- joined #gluster
12:29 Jules- Hey guys
12:30 Jules- can somebody tell me what this means and whether I've lost something when this warning occurs: [2015-10-07 11:10:02.116885] W [nfs3.c:1230:nfs3svc_lookup_cbk] 0-nfs: 96281a15: /www/blahblah/www/public/system/settings/config.inc.php => -1 (Stale NFS file handle)
12:34 julim joined #gluster
12:35 anil joined #gluster
12:36 mpietersen joined #gluster
12:38 pg joined #gluster
12:39 zhangjn joined #gluster
12:40 zhangjn joined #gluster
12:45 zhangjn joined #gluster
12:47 klaxa|work joined #gluster
12:47 Raide joined #gluster
12:48 zhangjn joined #gluster
12:52 chirino joined #gluster
12:54 luis_silva joined #gluster
12:54 cabillman joined #gluster
12:54 deniszh joined #gluster
12:59 Simmo joined #gluster
12:59 Simmo Hi Guys! : )
12:59 Simmo Another (likely dumb) question :-/  . Let's assume I finally have my volume running, mounted on /data/myapplication . Of course, that folder is initially empty.
13:00 hchiramm joined #gluster
13:01 Simmo But I have tons of files to synchronize sitting in /mnt/originaldata
13:02 Simmo Do I have any other choice than to copy the files from /mnt/originaldata to /data/myapplication to start syncing with GFS ? :-/
13:02 Simmo Is there a way to mount directly on /mnt/originaldata without losing the pre-existing files?
13:04 ndarshan joined #gluster
13:05 spcmastertim joined #gluster
13:06 overclk joined #gluster
13:07 kotreshhr joined #gluster
13:07 kotreshhr left #gluster
13:08 Philambdo joined #gluster
13:10 shyam joined #gluster
13:14 coredump joined #gluster
13:16 DV__ joined #gluster
13:17 Manikandan joined #gluster
13:18 shubhendu joined #gluster
13:18 shyam joined #gluster
13:20 Leildin joined #gluster
13:24 bennyturns joined #gluster
13:26 Raide joined #gluster
13:31 nbalacha joined #gluster
13:33 muneerse joined #gluster
13:38 skylar joined #gluster
13:41 Raide joined #gluster
13:44 Simmo mao :-)
13:44 Saravana_ joined #gluster
13:47 jamesc joined #gluster
13:48 harold joined #gluster
13:50 dlambrig joined #gluster
13:52 zhangjn joined #gluster
13:53 zhangjn joined #gluster
13:54 zhangjn joined #gluster
13:55 EinstCrazy joined #gluster
14:04 ppai joined #gluster
14:05 DV joined #gluster
14:09 plarsen joined #gluster
14:09 ira joined #gluster
14:11 GB21 joined #gluster
14:21 dlambrig_ joined #gluster
14:23 overclk joined #gluster
14:29 taolei joined #gluster
14:29 taolei Hi, if I have 4 nodes and create a Disperse volume with 8 (6+2) bricks, will the 2 redundancy bricks be created on different nodes automatically?
14:34 poornimag joined #gluster
14:35 hagarth taolei: the order of bricks specified determines the redundancy bricks (last two in this case)
14:37 taolei @hagarth: thank you very much, I'll try it.
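For reference, a sketch of the volume creation being discussed, assuming four hypothetical hosts node1..node4 with two bricks each; listing the bricks explicitly controls where each one lands (gluster will warn about placing multiple bricks of one disperse set on the same host):

    gluster volume create disp-vol disperse 8 redundancy 2 \
        node1:/bricks/b1 node2:/bricks/b1 node3:/bricks/b1 node4:/bricks/b1 \
        node1:/bricks/b2 node2:/bricks/b2 node3:/bricks/b2 node4:/bricks/b2
    gluster volume start disp-vol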
14:38 dpetrov hagarth, if you have a minute ..
14:38 dpetrov can you suggest what might be the problem with my setup ..
14:39 coredump joined #gluster
14:39 dpetrov I have two boxes, configured as replicate
14:40 dpetrov after a reload, the reloaded machine is unable to fetch information about the volume
14:40 dpetrov but the glusterfs sync seems to be working
14:40 dpetrov this is what I get in the logs
14:40 dpetrov 0-glusterfs: transport.address-family not specified. Could not guess default value from (remote-host:(null) or transport.unix.connect-path:(null)) options
14:40 dpetrov [2015-10-07 14:31:09.693791] I [socket.c:2255:socket_event_handler] 0-transport: disconnecting now
14:40 julim joined #gluster
14:40 anil joined #gluster
14:40 dpetrov root@rommgt001:/etc/glusterfs# gluster vol inf
14:40 dpetrov Connection failed. Please check if gluster daemon is operational.
14:41 hagarth dpetrov: is glusterd running on rommgt001?
14:41 dpetrov it is, yes
14:41 dpetrov but this is the bit I am slightly confused about
14:41 hagarth dpetrov: can you check if there are any errors in glusterd's log file?
14:41 dpetrov in this case, should I have an identical setup of "server"
14:41 dpetrov yep.. 1 sec
14:41 dpetrov so I am doing that: root@rommgt001:~# tail -n 100  -f /var/log/glusterfs/*.log
14:42 csim_ joined #gluster
14:42 dpetrov and when I try to fetch volume information
14:42 dpetrov I only see that in logs: [2015-10-07 14:42:02.782635] I [socket.c:3077:socket_submit_request] 0-glusterfs: not connected (priv->connected = 0)
14:42 dpetrov also, on the remote server I see this:
14:43 hagarth is this log message seen in cli.log?
14:44 dpetrov Yep
14:44 dpetrov http://pastebin.com/VXw82S0Y
14:44 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
14:45 dpetrov hagarth: can you please tell me if I am supposed to have the glusterd.vol configured on both servers
14:45 dpetrov or its like clients/server and that needs to be configured on one of the machines only?
14:45 hagarth dpetrov: yes, both servers need glusterd.vol
14:45 dpetrov I am a bit confused on that ..
14:46 dpetrov because bear in mind, that happened when I reloaded the machine
14:46 hagarth dpetrov: can you paste last few lines from glusterd.log of rom?
14:46 dpetrov sure
14:46 maserati joined #gluster
14:46 dpetrov [2015-10-07 10:42:57.258556] W [socket.c:523:__socket_rwv] 0-glusterfs: readv on 127.0.0.1:24007 failed (No data available)
14:46 dpetrov [2015-10-07 10:42:57.633316] W [glusterfsd.c:1095:cleanup_and_exit] (-->/lib/i386-linux-gnu/i686/cmov/libc.so.6(clone+0x5e) [0xb701662e] (-->/lib/i386-linux-gnu/i686/cmov/libpthread.so.0(+0x6efb) [0xb72a8efb] (-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xc9) [0x804e089]))) 0-: received signum (15), shutting down
14:46 glusterbot dpetrov: ('s karma is now -110
14:46 glusterbot dpetrov: ('s karma is now -111
14:46 glusterbot dpetrov: ('s karma is now -112
14:47 dpetrov these are the last few messages
14:47 dpetrov but they're too old
14:47 dpetrov from about 5h ago
14:47 togdon joined #gluster
14:47 GB21_ joined #gluster
14:48 hagarth dpetrov: can you provide the output of service glusterd status on rom?
14:48 moogyver joined #gluster
14:48 dpetrov http://paste.ubuntu.com/12705047/
14:48 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
14:49 overclk joined #gluster
14:50 ninkotech_ joined #gluster
14:50 hagarth dpetrov: can you restart glusterd on rom?
14:51 shortdudey123 joined #gluster
14:52 [o__o] joined #gluster
14:56 csim joined #gluster
14:57 amye joined #gluster
14:58 ayma joined #gluster
15:02 overclk joined #gluster
15:07 theron joined #gluster
15:10 bluenemo joined #gluster
15:14 nishanth joined #gluster
15:20 Pupeno_ joined #gluster
15:24 Pupeno__ joined #gluster
15:25 dgandhi (++
15:25 glusterbot dgandhi: ('s karma is now -111
15:25 unicky o.o
15:25 EinstCrazy joined #gluster
15:27 JoeJulian lol
15:27 bdiehr I'm having issues mounting my gluster storage to a mount point, my fstab entry is:
15:27 bdiehr 10.0.0.221:/mnt/storage      /home/gluster           glusterfs       defaults,_netdev       0 0
15:28 bdiehr 10.0.0.221 is the ip address of the gluster master, and i'm adding this entry on the gluster master as well
15:29 CyrilPeponnet Hi guys, My gluster setup is still spiking to 2400% CPU nothing relevant in logs
15:29 bdiehr some additional details: https://paste.fedoraproject.org/275942/42317711/
15:29 glusterbot Title: #275942 Fedora Project Pastebin (at paste.fedoraproject.org)
15:29 JoeJulian CyrilPeponnet: What version?
15:30 CyrilPeponnet 3.6.5
15:30 JoeJulian Does everything keep working?
15:30 CyrilPeponnet was working fine before, but for the last few days it's been going mad
15:30 CyrilPeponnet yes but freaking slow
15:30 CyrilPeponnet especially for write
15:30 CyrilPeponnet or dir listing
15:31 CyrilPeponnet I mean the VMs are working fine but copying a 1k file takes like 10s
15:31 JoeJulian bdiehr: The mount seems to exist but it is not connected. Check your client log.
15:31 CyrilPeponnet I have some profiling info
15:31 bdiehr is there a glusterfs client log?
15:31 Pupeno joined #gluster
15:31 Pupeno joined #gluster
15:32 JoeJulian bdiehr: /var/log/glusterfs/${mnt_mountpoint}.log
15:33 CyrilPeponnet we have a vol in replica 3 read only, and 2 other vols not replicated not distributed for write
15:33 CyrilPeponnet (sorry replica 2)
15:33 CyrilPeponnet as one of the nodes is spiking to 2400% CPU I assume this is slowing down the reads
15:34 CyrilPeponnet (but write are still good, at least our vm are working fine)
15:34 kanagaraj joined #gluster
15:34 JoeJulian @learn logs as All logs are under /var/log/glusterfs. glusterd (management daemon) is etc-glusterfs-glusterd.vol.log. Clients are named based on the mountpoint with '/' translated to '-'. Bricks are under the bricks subdirectory. Self-heal daemon and nfs are glustershd.log and nfs.log respectively.
15:34 CyrilPeponnet but listing a folder even with 10 files takes forever, and writing small files takes a lot of time
15:34 glusterbot JoeJulian: The operation succeeded.
15:35 JoeJulian CyrilPeponnet: What filesystem on the bricks?
15:35 CyrilPeponnet client are using fuse
15:35 CyrilPeponnet xfs
15:35 bdiehr thanks joe!
15:35 overclk joined #gluster
15:36 CyrilPeponnet with (,inode64,logbsize=64k,sunit=128,swidth=1280,noquota) as options. This is a hardware raid 6 of 12x2TB
15:36 JoeJulian I'm not sure how to read it, but paste up the profile info and I'll see what I can see.
15:36 CyrilPeponnet I can paste everything you need, because I've spent 3 days trying to find something relevant here with no luck
15:37 JoeJulian Yeah. I'm sorry you're having so much trouble. I hope I can help.
15:37 CyrilPeponnet I even unplugged half of our VMs but the load average / usage is the same. (I even tried to restart gluster and kill glustershd
15:37 CyrilPeponnet :)
15:37 CyrilPeponnet hold on profiles are comming
15:38 JoeJulian It's all replicated, right?
15:38 CyrilPeponnet yes
15:38 CyrilPeponnet no all
15:39 squizzi_ joined #gluster
15:39 CyrilPeponnet 1 vol is replica 2, 2 others are not (because they are hosting qcow and I want to maximise the throughput for write)
15:39 JoeJulian I was going to ask what happens if you pull the plug on the network, but that will cause your clients to pause for ping-timeout and may make your VMs go RO.
15:39 CyrilPeponnet Well I've done that by stopping the volume
15:40 CyrilPeponnet it goes down, but as soon as I restart it, it goes crazy
15:40 JoeJulian Right, but I want to see if the glusterfsd continues to spin even without network traffic.
15:40 anil joined #gluster
15:40 JoeJulian If so, we could probably figure out where with gdb.
15:41 CyrilPeponnet is there a way to isolate which volume is acting crazy? glusterfsd is holding a lot of threads but I'm not sure how to identify which one is dealing with which vol
15:41 CyrilPeponnet let me starts with the metrics
15:42 CyrilPeponnet any chance you are in California ? :p
15:42 JoeJulian The complete glusterfsd command line shows which brick it's hosting.
15:42 JoeJulian And no, I'm in Seattle.
15:42 CyrilPeponnet How great I'll be relocated to Vancouver BC soon not too far :p
15:43 JoeJulian Oooh, that's a much nicer place to live in my opinion.
15:43 bowhunter joined #gluster
15:43 CyrilPeponnet I'll see not really a choice of my own... thanks US immigration :)
15:44 JoeJulian Ah, one of /those/ moves. 6 months?
15:44 overclk joined #gluster
15:45 CyrilPeponnet yeah... between 6 and 12
15:47 CyrilPeponnet (I'm creating a gist with some info right now)
15:47 chirino joined #gluster
15:47 JoeJulian cool, I'm trying to figure out why this salt provisioning isn't doing what it's supposed to.
15:50 CyrilPeponnet @JoeJulian https://gist.github.com/CyrilPeponnet/ec1bcc743ac5a84debe8 here it is
15:50 CyrilPeponnet I forgot to mention that I have around 700 clients using gfs-fuse
15:51 CyrilPeponnet node 1 is holding a VIP but is not as loaded as node2
15:52 bdiehr Joe: Just in case you were curious, my issue was that I was referring to the path instead of the volume name. Thanks for the help once again! Slowly making my way through this
15:53 JoeJulian Ah yes. Common mistake.
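The corrected fstab line bdiehr describes would look roughly like this, mounting by volume name rather than by brick path (the volume name "gv0" here is hypothetical):

    # server:/VOLNAME, not server:/path/to/brick
    10.0.0.221:/gv0    /home/gluster    glusterfs    defaults,_netdev    0 0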
15:56 CyrilPeponnet @JoeJulian  let me know if you need more info
15:57 thoht hi
15:57 glusterbot thoht: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
15:57 thoht I've a split-brain on a specific file and I want to fix that
15:58 JoeJulian @split-brain
15:58 glusterbot JoeJulian: To heal split-brains, see https://gluster.readthedocs.org/en/release-3.7.0/Features/heal-info-and-split-brain-resolution/ For additional information, see this older article https://joejulian.name/blog/fixing-split-brain-with-glusterfs-33/ Also see splitmount https://joejulian.name/blog/glusterfs-split-brain-recovery-made-easy/
15:58 thoht I tried to use the bigger-file method but it says: No bigger file.
16:00 thoht JoeJulian: I'm already on this page, but there is no resolution for the case where the file sizes are identical
16:00 CyrilPeponnet @thoht if you know the file then delete the faulty one from the brick. Gluster will then heal the file using the "good" version of it remaining on the other brick
16:01 thoht CyrilPeponnet: yes i want to keep the file from node2. so i can remove the file from node1 that s correct ?
16:01 CyrilPeponnet what does getfattr -d -m . -e hex <path to your file> give on each brick ?
16:01 CyrilPeponnet yep
16:02 thoht CyrilPeponnet: this is the  output http://pastebin.com/EyUSMPK9
16:02 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
16:03 thoht CyrilPeponnet: to remove the file, should I stop gluster on both nodes and remove the file from the brick itself, or should I mount locally (mount -t glusterfs) and remove the file ?
16:03 CyrilPeponnet just remove the faulty one from the brick
16:03 CyrilPeponnet no need to stop the vol
16:04 thoht CyrilPeponnet: ok i removed it on node1
16:04 thoht i still see the split brain info; should i perform a heal now ?
16:04 CyrilPeponnet then trigger the heal by doing something like file /path/tofile on the mount point
16:05 CyrilPeponnet yep
16:06 thoht CyrilPeponnet: ok i did it but it doesn't change; still the error
16:06 CyrilPeponnet did the file get copied ?
16:07 thoht nope
16:07 dlambrig_ joined #gluster
16:07 CyrilPeponnet try gluster volume heal <VOLNAME> split-brain source-brick <HOSTNAME:BRICKNAME> <FILE>
16:07 CyrilPeponnet <HOSTNAME:BRICKNAME>
16:07 thoht http://fpaste.org/275981/23403314/
16:07 glusterbot Title: #275981 Fedora Project Pastebin (at fpaste.org)
16:07 thoht ok
16:07 CyrilPeponnet with good one
16:07 JoeJulian CyrilPeponnet: None of the cores are 100%, so it's not an endless loop in the code. You've configured 24 io-threads which may or may not be an issue depending on your raid controller (io threads, not cpu cores). But the one thing that seems really telling is that mvdcgluster02 is brick1. I'm betting it's getting all the actual load. Try setting cluster.read-hash-mode 2 on that volume and see what happens.
16:08 CyrilPeponnet @JoeJulian what is this setting ?
16:08 CyrilPeponnet I can reset the io-thread to default
16:08 JoeJulian Description: inode-read fops happen only on one of the bricks in replicate. AFR will prefer the one computed using the method specified using this option. 0 = first up server, 1 = hash by GFID of file (all clients use same subvolume), 2 = hash by GFID of file and client PID
16:09 thoht CyrilPeponnet: it worked but with an error: http://fpaste.org/275985/44234161/
16:09 glusterbot Title: #275985 Fedora Project Pastebin (at fpaste.org)
16:09 thoht but no split brain anymore !
16:10 CyrilPeponnet \o/
16:10 thoht CyrilPeponnet: in fact there is still an entry; can you check the last pastebin and tell me if I should worry about it ?
16:10 thoht it is not said split brain
16:10 thoht it is just saying there is a missing file i guess
16:11 CyrilPeponnet @JoeJulian Yes, I was also wondering if node2 was taking all the load for read operations on my replica 2. I even tried to add a third brick to this volume but that was worse, so I removed it.
16:11 thoht and i can see it on both node now
16:11 JoeJulian thoht: It should heal itself and clear
16:11 thoht but the message still remains
16:12 CyrilPeponnet @thoht looks good to me, try accessing the file through the mountpoint
16:12 CyrilPeponnet this will be healed, give it some time
16:12 thoht ok after a new heal; i got no error anymore
16:12 CyrilPeponnet @JoeJulian let me try this option
16:12 thoht thanks a lot !!
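A condensed sketch of the split-brain procedure CyrilPeponnet walked thoht through above, with hypothetical volume/brick/file names:

    # 1. compare the afr changelog xattrs of the two copies, directly on each brick
    getfattr -d -m . -e hex /bricks/b1/path/to/file        # run on node1 and node2
    # 2. on the node whose copy you do NOT trust, delete that copy from the brick
    rm /bricks/b1/path/to/file                             # (and its .glusterfs gfid hardlink, if present)
    # 3. trigger a lookup from a client mount so the file heals from the good copy
    stat /mnt/myvol/path/to/file
    # or name the good copy explicitly, as used above
    gluster volume heal myvol split-brain source-brick node2:/bricks/b1 /path/to/file
    gluster volume heal myvol info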
16:13 CyrilPeponnet @JoeJulian this is a cluster option so we don't care on which volume I apply it right ?
16:13 bdiehr If you mount a glusterFS by specifying a specific gluster peer to connect to, what's the best way to load balance between the peers?
16:13 JoeJulian CyrilPeponnet: I thought it was volume specific.
16:13 JoeJulian It must be, actually.
16:13 CyrilPeponnet ok so I should apply this to my replica 2 vol right
16:14 CyrilPeponnet no need to apply this to non replicated / distributed volume? @JoeJulian
16:16 CyrilPeponnet Ok, resetting performance.io-thread-count on each vol and applying cluster.read-hash-mode 2 on my replica 2
16:16 JoeJulian right
16:16 CyrilPeponnet cross your fingers
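The two changes CyrilPeponnet describes applying, as a sketch against the replica 2 volume usr_global:

    # put the io-thread count back to its default (repeat for each volume)
    gluster volume reset usr_global performance.io-thread-count
    # spread inode-read fops across both replicas (hash of GFID + client PID)
    gluster volume set usr_global cluster.read-hash-mode 2
    # check the reconfigured options
    gluster volume info usr_global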
16:19 k-ma what dependencies was wheezy lacking to get 3.7? tried to compare jessie 3.7 -> wheezy 3.6 debian/control and didn't spot any
16:19 _Bryan_ joined #gluster
16:21 Pupeno joined #gluster
16:21 CyrilPeponnet @JoeJulian to underwent cluster.read-hash-mode 2 - by default in replicate, all reads are done on one brick; with mode 2 I'm not sure what it does
16:25 CyrilPeponnet with gluster vol status usr_global clients it looks like clients are better spread (half and half)
16:25 CyrilPeponnet *to understand
16:26 Pupeno_ joined #gluster
16:27 Pupeno_ joined #gluster
16:27 cholcombe joined #gluster
16:27 JoeJulian k-ma: My understanding is that there are shared library functions that don't have the necessary function.
16:29 JoeJulian CyrilPeponnet: Right. It takes a hash of the filename, adds the pid of the client, and divides it by the number of bricks. Takes the remainder and uses that to pick a brick to read from.
16:30 Pupeno joined #gluster
16:30 CyrilPeponnet So now it could be good to add a third brick right ?
16:30 k-ma JoeJulian: ok, i guess the dependency list just doesn't have proper versioning for the packages
16:30 k-ma http://paste.fedoraproject.org/275995/35397144/
16:30 glusterbot Title: #275995 Fedora Project Pastebin (at paste.fedoraproject.org)
16:30 CyrilPeponnet this will divide the load by 3
16:30 k-ma might give it a go anyway :)
16:31 JoeJulian CyrilPeponnet: Seems possible. Writes and directory creations will still happen everywhere and you seem to have a lot of those.
16:31 JoeJulian CyrilPeponnet: If you add two more servers (replica 2 still) you would spread the write load.
16:32 CyrilPeponnet that's planned
16:32 CyrilPeponnet but 2 more servers to replicate what?
16:33 JoeJulian Add 2 servers to usr_global, since that's the one that seems to be overloaded.
16:33 CyrilPeponnet this is something I don't really get... if I add more servers and I want to use a replicated model, the replica number will be the number of servers, right?
16:33 JoeJulian No
16:33 CyrilPeponnet so I can create a replica 2
16:33 CyrilPeponnet with 4 bricks
16:34 CyrilPeponnet how does it work? I think I'm missing a fundamental thing here
16:34 JoeJulian You have a replica 2 volume with 2 bricks. So now all write traffic goes to both bricks. If you add two more bricks, you'll have a distributed replicated volume. The file names will be distributed so your writes will only go to two of the four bricks.
16:34 gem joined #gluster
16:35 JoeJulian Since you'll be writing to multiple files, your writes will effectively be split among two replicated pairs of bricks, halving the write io to each pair.
16:38 JoeJulian The method for adding your two new bricks is "gluster volume add-brick usr_global serverx:/brick_path servery:/brick_path". Not the absence of "replica N" because we're not changing it from "replica 2".
16:38 kovshenin joined #gluster
16:38 JoeJulian s/Not /Note /
16:38 glusterbot What JoeJulian meant to say was: The method for adding your two new bricks is "gluster volume add-brick usr_global serverx:/brick_path servery:/brick_path". Note the absence of "replica N" because we're not changing it from "replica 2".
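Putting JoeJulian's add-brick command together with the usual follow-up, with hypothetical server and brick names; the optional rebalance spreads already-existing files onto the new replica pair:

    # grow usr_global from one replica pair to two (distributed-replicate), replica count unchanged
    gluster volume add-brick usr_global server3:/bricks/usr_global server4:/bricks/usr_global
    # optionally redistribute existing files onto the new pair
    gluster volume rebalance usr_global start
    gluster volume rebalance usr_global status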
16:43 coredump joined #gluster
16:45 hagarth joined #gluster
16:51 shyam left #gluster
16:53 CyrilPeponnet @JoeJulian ok so adding bricks without setting replica will put them in distributed
16:54 dlambrig_ left #gluster
16:54 CyrilPeponnet is distributed better for read or write or both
16:55 kovsheni_ joined #gluster
16:56 JoeJulian If you have a single file that's read a ton, like if you're netflix and have extremely popular sets of files, having large numbers of replica will be the most useful. For most other workloads, replication is more about fault tolerance and distribution is better for handling load.
17:02 CyrilPeponnet makes sense. I have a vol distributed across 2 nodes; if I lose one of the nodes, or if I need to reboot it for an upgrade, what will happen?
17:03 JoeJulian You won't have access to half your files until it comes back.
17:04 bdiehr Is there a gluster-specific way to load balance replicated volumes? Or is that something done entirely outside the realm of gluster?
17:06 bdiehr it looks like I should just do a round robin DNS entry?
17:12 rafi joined #gluster
17:13 ramky joined #gluster
17:21 kovshenin joined #gluster
17:23 JoeJulian ~mount server | bdiehr
17:23 glusterbot bdiehr: (#1) The server specified is only used to retrieve the client volume definition. Once connected, the client connects to all the servers in the volume. See also @rrdns, or (#2) One caveat is that the clients never learn of any other management peers. If the client cannot communicate with the mount server, that client will not learn of any volume changes.
17:24 CyrilPeponnet @JoeJulian looks like the load average is still the same :/
17:24 bdiehr @rrdns
17:24 JoeJulian CyrilPeponnet: Did the load at least go up on the other server?
17:24 glusterbot bdiehr: You can use rrdns to allow failover for mounting your volume. See Joe's tutorial: http://goo.gl/ktI6p
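Besides rrdns, the fuse client can also be given fallback volfile servers at mount time; a sketch with hypothetical server names (the option is spelled backupvolfile-server in older releases, backup-volfile-servers in newer ones):

    mount -t glusterfs -o backup-volfile-servers=server2:server3 server1:/myvol /mnt/myvol
    # or in /etc/fstab
    server1:/myvol  /mnt/myvol  glusterfs  defaults,_netdev,backup-volfile-servers=server2:server3  0 0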
17:24 CyrilPeponnet not really...
17:24 bdiehr Thanks Joe, i'm going to go through your tutorial
17:24 JoeJulian CyrilPeponnet: I'm wondering if you just have a bad drive that's making everything slow.
17:25 JoeJulian bdiehr: point being, that's not load balancing, that's just ensuring you're able to mount with a management daemon that's down.
17:25 CyrilPeponnet I've checked but using megaraid it's hard to see anything
17:26 bdiehr My understanding is that there is no need for load balancing? Just ensure that you can get access to a volume and the rest is taken care of internally?
17:26 shyam joined #gluster
17:27 JoeJulian bdiehr: essentially, yes, though you might want to look at cluster.read-hash-mode.
17:27 bdiehr Will do, thanks
17:28 Manikandan joined #gluster
17:29 CyrilPeponnet @JoeJulian http://i.imgur.com/fG1PSKx.png
17:29 CyrilPeponnet I don't know what happened early week 30
17:29 CyrilPeponnet nothing changed
17:29 CyrilPeponnet I remember to play with cache performance
17:29 CyrilPeponnet cache size
17:30 CyrilPeponnet same but for the day: http://i.imgur.com/cxFkOvt.png
17:42 deniszh joined #gluster
17:48 shyam joined #gluster
17:52 maveric_amitc_ joined #gluster
17:53 Pupeno_ joined #gluster
18:31 ChrisNBlum joined #gluster
18:41 CyrilPeponnet @JoeJulian I checked the drives and they are fine; the weirdest thing is that everything happened September 30th @ 11h40. At that time I was playing with read-cache settings, but I reverted them later.
18:55 ramky joined #gluster
18:59 Manikandan joined #gluster
19:04 kayn joined #gluster
19:17 mhulsman joined #gluster
19:24 _maserati_ joined #gluster
19:25 jwaibel joined #gluster
19:26 mhulsman1 joined #gluster
19:29 moogyver joined #gluster
19:31 jdossey joined #gluster
19:34 theron joined #gluster
19:34 JM_ joined #gluster
19:42 Ru57y joined #gluster
19:47 devilspgd joined #gluster
19:47 semiosis joined #gluster
19:48 yosafbridge joined #gluster
19:50 squizzi__ joined #gluster
19:53 aaronott joined #gluster
19:55 mhulsman joined #gluster
20:07 firemanxbr joined #gluster
20:10 DV joined #gluster
20:18 arcolife joined #gluster
20:19 tg2w joined #gluster
20:19 tg2w anybody know what this stack trace crash looks like? http://fpaste.org/276104/44424907/
20:19 glusterbot Title: #276104 Fedora Project Pastebin (at fpaste.org)
20:19 tg2w glusterfs 3.6.2
20:26 timotheus1 joined #gluster
20:30 JoeJulian If I squint and turn my head sideways... a birthday cake.
20:32 JoeJulian that says "kernel" and it's referring to an IRQ. Looks like a driver issue to me.
20:34 ildefonso joined #gluster
20:36 theron joined #gluster
20:37 theron joined #gluster
20:41 mhulsman joined #gluster
20:55 juhaj joined #gluster
20:58 juhaj Why would one client cause "0-glusterd: Request received from non-privileged port. Failing request" but not the others? All clients are proper machines (i.e. no virtual machines), without firewalls, on the same subnet etc. Cannot think of any difference between working clients and non-working except versions: 3.5 vs 3.7. Is 3.7 not compatible with 3.5?
21:00 tsaavik joined #gluster
21:02 tsaavik hey all, quick question, will I run into (locking related) issues if I have a (newly rebuilt) node running 3.5.6 and trying to connect to 3.5.1
21:08 bowhunter joined #gluster
21:14 tsaavik ah, figured it out, I forgot to set the trusted.glusterfs.volume
21:23 JoeJulian juhaj: The restriction of using a port <= 1024 was changed. Instructions for solving that issue are in the 3.7 release notes: https://github.com/gluster/glusterfs/blob/release-3.7/doc/release-notes/3.7.0.md#known-issues
21:23 glusterbot Title: glusterfs/3.7.0.md at release-3.7 · gluster/glusterfs · GitHub (at github.com)
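The settings those release notes point at, roughly; treat this as a sketch and check the notes for your exact version:

    # per affected volume: accept client connections from ports above 1024
    gluster volume set myvol server.allow-insecure on
    # and in /etc/glusterfs/glusterd.vol on every server, then restart glusterd:
    #   option rpc-auth-allow-insecure on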
21:25 cholcombe joined #gluster
21:35 frozengeek joined #gluster
21:36 jdossey joined #gluster
21:38 frozengeek hey, does anybody know whether there are plans to extend the snapshot functionality to btrfs? I'm currently torn between btrfs and xfs on top of thinp. oe
21:39 frozengeek (scrap that "oe" at the  end, accidentally smashed my keyboard)
21:39 JoeJulian I know that's desired, but I don't know where it's at in the planning stages.
21:46 muneerse2 joined #gluster
21:47 theron_ joined #gluster
21:48 cholcombe joined #gluster
21:57 Pupeno joined #gluster
21:59 jamesc joined #gluster
22:05 deniszh joined #gluster
22:11 togdon joined #gluster
22:23 Trefex joined #gluster
22:23 Trefex hello. i have a question. i use a 3-server gluster setup with a gateway to export smb and others
22:23 juhaj JoeJulian: You mean this "The following configuration changes are necessary for qemu and samba integration with libgfapi to work seamlessly:"?
22:23 Trefex i want to run glusternfs on the gateway so that i can NFS mount on a client directly
22:23 Trefex is this possible?
22:24 juhaj JoeJulian: I had already found that, but not any mention that it also applies to non-qemu and non-samba cases.
22:28 JoeJulian Yeah, you're not the first one to find that surprise.
22:30 JoeJulian Trefex: not sure what "the gateway" is, but if you join a computer to the trusted peer group, it will run the nfs service.
22:30 Trefex JoeJulian: it's a gateway node, not part of the storage, but part of the gluster peer group, yes. I found it didn't start because native NFS was running
22:30 Trefex now i get a "no route to host" error, which I'm trying to figure out
22:33 Trefex which was due to firewall
22:33 Trefex it's always magic how explaining problems in IRC will make you find the solution that much faster :S
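Once the gateway's gluster NFS service is reachable, the client-side mount is plain NFSv3; a sketch assuming a hypothetical volume name "myvol" and a host called gateway:

    mount -t nfs -o vers=3,mountproto=tcp gateway:/myvol /mnt/myvol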
22:34 cabillman joined #gluster
22:43 skylar1 joined #gluster
22:43 juhaj JoeJulian: Thanks.
22:50 JoeJulian Trefex: I know, right! :D
22:50 Trefex ye ^^
22:50 Trefex transferring data like a boss now
22:51 Trefex seems to be much quicker than gluster-fuse
22:55 Trefex JoeJulian: ^^
22:57 JoeJulian No
22:57 Trefex no?
22:58 JoeJulian Actual transfer rates will be slower and it won't handle as many simultaneous threads as quickly. The kernel, however, does cache attribute lookups so metadata operations will be faster.
22:58 Trefex well i'm transferring rsync including many small files
22:58 JoeJulian As long as you don't need current metadata that may benefit your use case.
22:58 Trefex and it seems that sending to rsyncd on gluster-fuse is much slower
22:59 JoeJulian Ah, rsync is inefficient.
22:59 Trefex than doing rsync to nfs-mounted
22:59 Trefex well what else i can use ? i have to migrate my data somehow
22:59 JoeJulian Even worse, unless you're using --inplace, you're creating temporary filenames which will has out to the wrong brick after they're renamed which will cause inefficient lookups later.
22:59 JoeJulian s/has out/hash out/
22:59 glusterbot What JoeJulian meant to say was: Even worse, unless you're using --inplace, you're creating temporary filenames which will hash out to the wrong brick after they're renamed which will cause inefficient lookups later.
23:00 Trefex i'm using whole file and inplace
23:00 Trefex but do you know any alterntive?
23:00 JoeJulian cpio
23:00 Trefex cpio to fuse?
23:01 theron joined #gluster
23:01 JoeJulian "cpio | pv | netcat" has been my preference since I tested it.
23:01 JoeJulian Assuming you need to transfer over a network.
23:01 Trefex i do
23:01 JoeJulian Do that to a fuse mount on the server and that's about the fastest you can get.
23:01 dlambrig_ joined #gluster
23:02 Trefex do you have any doc on cpio / pv / netcat? I'm not familiar with any of those tools, never mind setting it up as a client/server thing
23:03 JoeJulian Hrm... no. cpio and netcat are super well known tools; pv is just nice for seeing throughput. I'll see if I can write something up though. Maybe tonight.
23:04 togdon joined #gluster
23:05 Trefex thanks that would be ever so helpful
23:05 Trefex this would not check for existing files right?
23:05 Trefex eg not suited for incremental stuff
23:07 skoduri joined #gluster
23:14 jdossey joined #gluster
23:16 gildub joined #gluster
23:18 JoeJulian right
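One common way to wire up the cpio | pv | netcat pipeline JoeJulian mentions (all hostnames and paths hypothetical; note this copies blindly with no incremental/existing-file checks, and netcat listen syntax differs between the BSD and traditional variants):

    # receiver: inside a fuse mount of the target volume
    cd /mnt/myvol && nc -l 9999 | pv | cpio -idum
    # sender: from the source directory, stream the tree over the network
    cd /data/src && find . -depth -print0 | cpio -o0 -H newc | pv | nc receiver.example.com 9999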
