
IRC log for #gluster, 2015-11-24


All times shown according to UTC.

Time Nick Message
00:02 zhangjn joined #gluster
00:08 plarsen joined #gluster
00:09 EinstCrazy joined #gluster
00:13 ira joined #gluster
00:26 mlhamburg joined #gluster
00:34 DV joined #gluster
00:37 dblack joined #gluster
00:47 gildub joined #gluster
00:52 srsc joined #gluster
00:52 srsc joined #gluster
00:55 Telsin joined #gluster
00:56 srsc i'm trying to set up some performance benchmarking where multiple gluster clients read files in parallel. ideally each client should read a file from a different replicate pair in the distributed-replicate volume. outside of creating randomly named files and manually checking on the gluster servers to curate a list of test files, is there a way to create this scenario? i.e. a way to make sure that a file gets put on a specific replicate pair/predict DHT?
01:00 zhangjn joined #gluster
01:00 EinstCrazy joined #gluster
01:01 zhangjn joined #gluster
01:02 EinstCra_ joined #gluster
01:07 lanning the mapping for the DHT is different for every directory
01:08 JoeJulian there's a special naming convention for dht to force subvolumes...
01:12 srsc that sounds perfect. can't seem to raise it in google searches so far though.
01:13 JoeJulian yeah, I even know about it and I'm having trouble finding it.
01:20 zhangjn joined #gluster
01:21 B21956 left #gluster
01:27 zhangjn_ joined #gluster
01:28 JoeJulian https://lists.gnu.org/archive/html/gluster-devel/2013-04/msg00013.html
01:28 glusterbot Title: Re: [Gluster-devel] [Gluster-users] GlusterFS and optimization of locali (at lists.gnu.org)
01:36 Lee1092 joined #gluster
01:43 julim joined #gluster
01:46 srsc brilliant, works like a charm. my vol is named main, so i did things like "echo test > test0@main-dht:main-replicate-0", etc
01:46 srsc thanks much JoeJulian
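For anyone finding this later: the thread JoeJulian linked describes a special filename suffix that DHT interprets to force placement, of the form <name>@<volname>-dht:<dht-subvolume>. A minimal sketch based on srsc's example above, assuming a volume named main mounted at /mnt/main (paths and file names are illustrative):

    # from a client, create test files pinned to specific replicate subvolumes
    echo test > /mnt/main/test0@main-dht:main-replicate-0
    echo test > /mnt/main/test1@main-dht:main-replicate-1
    # verify placement by listing the brick directories on the servers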
01:59 skylar joined #gluster
02:00 7GHABPWZA joined #gluster
02:01 haomaiwang joined #gluster
02:02 nangthang joined #gluster
02:05 gbox joined #gluster
02:06 gbox Hi I need to take a node offline for a hardware fix.  If possible should I stop the volume that node hosts?
02:18 Mr_Psmith joined #gluster
02:24 gem joined #gluster
02:27 JoeJulian gbox: you can just stop glusterd and pkill glusterfsd
02:29 gbox JoeJulian: Sure.  So stopping the whole volume is probably overkill.  Thanks!
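In command form, JoeJulian's suggestion for taking one node down for maintenance is roughly the following sketch (assumes a systemd-based distro; <volname> is a placeholder):

    # on the node going offline
    systemctl stop glusterd
    pkill glusterfsd                      # stop the brick processes
    # ... do the hardware fix, reboot, etc. ...
    systemctl start glusterd              # brick processes are respawned
    gluster volume heal <volname> info    # watch self-heal catch the node back up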
02:49 CP|AFK joined #gluster
03:01 haomaiwa_ joined #gluster
03:02 atinm joined #gluster
03:06 Merlin__ joined #gluster
03:06 Intensity joined #gluster
03:09 Peppaq joined #gluster
03:32 rejy joined #gluster
03:48 bharata-rao joined #gluster
03:48 sakshi joined #gluster
03:50 aravindavk joined #gluster
03:51 chirino joined #gluster
03:58 DV joined #gluster
03:59 overclk joined #gluster
04:01 haomaiwang joined #gluster
04:02 shubhendu joined #gluster
04:12 itisravi joined #gluster
04:16 gildub joined #gluster
04:16 ramteid joined #gluster
04:18 [7] joined #gluster
04:18 nishanth joined #gluster
04:22 zhangjn joined #gluster
04:29 rafi joined #gluster
04:30 gem joined #gluster
04:31 jiffin joined #gluster
04:35 ppai joined #gluster
04:39 hgowtham joined #gluster
04:43 neha_ joined #gluster
04:51 vimal joined #gluster
04:56 kotreshhr joined #gluster
04:57 hgowtham joined #gluster
04:58 hos7ein joined #gluster
04:58 skylar1 joined #gluster
04:59 RameshN joined #gluster
05:01 haomaiwa_ joined #gluster
05:07 skoduri joined #gluster
05:07 ashiq joined #gluster
05:09 Manikandan joined #gluster
05:10 calavera joined #gluster
05:17 ndarshan joined #gluster
05:20 RameshN joined #gluster
05:21 deepakcs joined #gluster
05:21 Manikandan joined #gluster
05:23 zhangjn joined #gluster
05:26 haomaiwang joined #gluster
05:30 anil_ joined #gluster
05:32 pppp joined #gluster
05:33 Bhaskarakiran joined #gluster
05:37 EinstCrazy joined #gluster
05:38 nehar joined #gluster
05:41 poornimag joined #gluster
05:43 nbalacha joined #gluster
05:43 deepakcs joined #gluster
05:49 pcaruana joined #gluster
05:54 Apeksha joined #gluster
06:00 skoduri joined #gluster
06:01 haomaiwang joined #gluster
06:06 kshlm joined #gluster
06:11 rjoseph joined #gluster
06:16 chirino joined #gluster
06:19 atalur joined #gluster
06:21 vmallika joined #gluster
06:23 SOLDIERz joined #gluster
06:24 Saravana_ joined #gluster
06:26 ppai joined #gluster
06:29 Park joined #gluster
06:38 ramky joined #gluster
06:39 harish_ joined #gluster
06:39 hgowtham joined #gluster
06:39 harish_ joined #gluster
06:43 deepakcs joined #gluster
06:43 kovshenin joined #gluster
06:45 spalai joined #gluster
06:47 ayma joined #gluster
06:49 EinstCrazy joined #gluster
06:51 partner joined #gluster
07:00 hgowtham joined #gluster
07:00 ayma1 joined #gluster
07:00 mlhamburg1 joined #gluster
07:01 haomaiwa_ joined #gluster
07:04 nehar joined #gluster
07:06 jwd joined #gluster
07:10 Philambdo joined #gluster
07:11 gem joined #gluster
07:19 jtux joined #gluster
07:23 mobaer joined #gluster
07:25 armyriad joined #gluster
07:31 Merlin__ joined #gluster
07:37 mhulsman joined #gluster
07:40 Humble joined #gluster
07:41 mhulsman1 joined #gluster
07:48 chirino joined #gluster
07:54 mbukatov joined #gluster
08:00 dusmant joined #gluster
08:01 haomaiwang joined #gluster
08:03 creshal joined #gluster
08:07 mhulsman joined #gluster
08:12 Akee joined #gluster
08:13 mhulsman1 joined #gluster
08:15 ivan_rossi joined #gluster
08:21 Norky joined #gluster
08:25 [Enrico] joined #gluster
08:37 fsimonce joined #gluster
09:01 haomaiwang joined #gluster
09:07 atrius joined #gluster
09:14 harish_ joined #gluster
09:14 arcolife joined #gluster
09:18 Bardack ahoi
09:19 Bardack is the redhat repo RHEL Gluster Storage 3.1 Console ok? i'm using katello, able to sync all repos but not that one …
09:23 ppai joined #gluster
09:29 suliba joined #gluster
09:47 morse joined #gluster
09:52 arcolife joined #gluster
09:54 shubhendu joined #gluster
10:01 haomaiwang joined #gluster
10:04 RameshN Bardack, what is the exact issue with that repo?
10:06 Bardack well, katello complains there is no filelists.xml file
10:06 Bardack so the sync fails
10:13 Saravana_ joined #gluster
10:15 MrAbaddon joined #gluster
10:38 Norky joined #gluster
10:40 nishanth joined #gluster
10:42 creshal What logging do I have to configure to get anything actually useful? At some random point gluster just starts to spam "E [socket.c:2501:socket_poller] 0-socket.management: error in polling loop" and I can't do anything until I've repeatedly restarted gluster on all nodes.
10:42 Akee joined #gluster
10:43 zhuchkov joined #gluster
10:43 creshal Or reboot, because sometimes gluster mounts decide to hang indefinitely even after a restart.
10:44 Saravana_ joined #gluster
10:47 srsc joined #gluster
10:48 atrius_ joined #gluster
10:49 creshal https://gist.githubusercontent.com/creshal/d19c0dbe07598d7969fa/raw/45076404317db127d3aab66ab5a851d4e09fef2c/gistfile1.txt What the hell broke now?
10:51 RameshN Bardack, I am not sure what is causing this issue.
10:51 RameshN Bardack, U are using CDN  or RHN?
10:55 shubhendu joined #gluster
10:57 Norky joined #gluster
11:00 bluenemo joined #gluster
11:00 creshal https://www.gluster.org/pipermail/gluster-users/2015-August/023322.html Same problem as here, it seems. Secondary node sees both as connected, primary node sees secondary as disconnected. How do I unfuck this?
11:00 glusterbot Title: [Gluster-users] Gluster peer status disconnected on node where glusterd has been started first (at www.gluster.org)
11:01 haomaiwa_ joined #gluster
11:02 jiffin REMINDER: Gluster Bug Triage meeting starts approximately in 1 hour in #gluster-meeting
11:06 Bardack RameshN: well, using katello which uses rhn afaik
11:08 zoldar what is the right way to set/reset global options, like cluster.server-quorum-ratio ?
11:08 EinstCrazy joined #gluster
11:09 EinstCrazy joined #gluster
11:09 kkeithley1 joined #gluster
11:15 zoldar ok, got it, have to execute for all volumes at once
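For reference, cluster-wide options such as cluster.server-quorum-ratio are set with the special volume name "all" rather than per volume; a sketch (the 51% value is just an example):

    gluster volume set all cluster.server-quorum-ratio 51%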
11:17 atalur joined #gluster
11:25 shyam joined #gluster
11:30 skoduri joined #gluster
11:31 overclk joined #gluster
11:34 skoduri_ joined #gluster
11:34 ppai joined #gluster
11:37 dusmant joined #gluster
11:39 mhulsman joined #gluster
11:41 mhulsman1 joined #gluster
11:42 creshal https://www.gluster.org/pipermail/gluster-users/2015-August/023357.html I'm seeing that problem, too. Is SSL support just that broken in Gluster, or is that coincidental to the split-brain issues I'm seeing?
11:42 glusterbot Title: [Gluster-users] gluster volume heal info failed when use ssl\tls (at www.gluster.org)
11:50 surabhi joined #gluster
11:50 ctria joined #gluster
11:56 ppai joined #gluster
11:57 nbalacha joined #gluster
12:00 jiffin REMINDER: Gluster Bug Triage meeting started
12:01 haomaiwang joined #gluster
12:03 rafi1 joined #gluster
12:03 kaushal_ joined #gluster
12:06 ndarshan joined #gluster
12:09 jrm16020 joined #gluster
12:17 kotreshhr left #gluster
12:18 morse joined #gluster
12:18 zhangjn joined #gluster
12:18 Bhaskarakiran joined #gluster
12:19 zhangjn joined #gluster
12:20 zhangjn joined #gluster
12:21 zhangjn joined #gluster
12:22 TvL2386 joined #gluster
12:22 zhangjn joined #gluster
12:28 EinstCrazy joined #gluster
12:28 zhangjn joined #gluster
12:30 jrm16020 joined #gluster
12:33 zhangjn joined #gluster
12:34 atalur joined #gluster
12:35 EinstCra_ joined #gluster
12:40 diegows joined #gluster
12:45 Mr_Psmith joined #gluster
12:54 nishanth joined #gluster
13:00 dusmant joined #gluster
13:00 shubhendu joined #gluster
13:01 64MAED48W joined #gluster
13:03 rafi joined #gluster
13:08 surabhi joined #gluster
13:08 Slashman joined #gluster
13:12 haomaiwa_ joined #gluster
13:14 wolsen joined #gluster
13:19 poornimag joined #gluster
13:19 rjoseph joined #gluster
13:36 Merlin__ joined #gluster
13:37 ahino joined #gluster
13:39 rjoseph joined #gluster
13:44 atinm joined #gluster
13:49 haomaiwa_ joined #gluster
13:53 unclemarc joined #gluster
13:57 chirino joined #gluster
13:58 nbalacha joined #gluster
14:00 chirino joined #gluster
14:01 haomaiwang joined #gluster
14:03 Gill joined #gluster
14:05 shyam joined #gluster
14:07 nbalacha joined #gluster
14:11 dlambrig_ joined #gluster
14:13 B21956 joined #gluster
14:14 Ru57y joined #gluster
14:16 mobaer joined #gluster
14:16 cristov joined #gluster
14:19 kotreshhr joined #gluster
14:19 kotreshhr left #gluster
14:21 kotreshhr joined #gluster
14:21 kotreshhr left #gluster
14:21 Park joined #gluster
14:27 shubhendu_ joined #gluster
14:30 dusmant joined #gluster
14:32 skylar joined #gluster
14:35 jdarcy joined #gluster
14:35 ToMiles joined #gluster
14:36 mlncn joined #gluster
14:37 rjoseph joined #gluster
14:37 shubhendu__ joined #gluster
14:38 jiffin joined #gluster
14:42 ToMiles JoeJulian: Full heal issue we discussed yesterday has been added to bugzilla https://bugzilla.redhat.com/show_bug.cgi?id=1284863
14:42 glusterbot Bug 1284863: medium, unspecified, ---, bugs, NEW , Full heal of volume fails on some nodes "Commit failed on X", and glustershd logs "Couldn't get xlator xl-0"
14:46 spalai left #gluster
14:50 chirino joined #gluster
14:58 ctria joined #gluster
15:01 haomaiwang joined #gluster
15:11 mhulsman joined #gluster
15:12 calavera joined #gluster
15:13 mhulsman1 joined #gluster
15:14 plarsen joined #gluster
15:15 dgandhi joined #gluster
15:20 ctria joined #gluster
15:30 skoduri joined #gluster
15:36 josh__ is there an option to avoid the prompt when deactivating a snapshot?
15:37 Merlin__ joined #gluster
15:42 hagarth_ josh__: have you tried running the command with --mode=script ?
15:43 Merlin__ joined #gluster
15:46 rafi joined #gluster
15:47 kovshenin joined #gluster
15:48 josh__ hagarth: i wasn't aware of that option, but it solved my problem.  thanks.
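For reference, the non-interactive form hagarth suggested looks like this (sketch; the snapshot name is made up):

    # --mode=script suppresses the interactive y/n prompt, handy for cron jobs
    gluster snapshot deactivate snap_daily_1 --mode=script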
16:00 rafi joined #gluster
16:06 maserati joined #gluster
16:10 Merlin__ joined #gluster
16:11 cholcombe joined #gluster
16:13 wushudoin joined #gluster
16:18 dcmbrown So I mentioned the other day on here that I was trying to add a second GFS volume to a host temporarily to replace the original GFS volume but was running into the error caused by both hosts having the same UUID.  The "one of the bricks contain the other" error.
16:18 glusterbot dcmbrown: Check that your peers have different UUIDs ('gluster peer status' on both). The uuid is saved in /var/lib/glusterfs/glusterd.info - that file should not exist before starting glusterd the first time. It's a common issue when servers are cloned. You can delete the /var/lib/glusterfs/peers/<uuid> file and /var/lib/glusterfs/glusterd.info, restart glusterd and peer-probe again.
16:20 wushudoin joined #gluster
16:20 dcmbrown Shutting down glusterd on one of the hosts and changing its UUID via the solution glusterbot mentions here, then manually editing the /var/lib/glusterd/peers/<uuid> file and restarting glusterd on the host(s) still running (it's only a two-server setup atm) appears to have no detrimental consequences (e.g. a permanent split brain).
16:21 dcmbrown Although this is on a fairly unchanging gfs file share so results may vary.
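Roughly the sequence glusterbot and dcmbrown describe, as a sketch only (on recent packages the state directory is typically /var/lib/glusterd rather than /var/lib/glusterfs; hostnames and the <uuid> are placeholders):

    # on the cloned node
    systemctl stop glusterd
    rm /var/lib/glusterd/glusterd.info        # a fresh UUID is generated on next start
    rm /var/lib/glusterd/peers/<uuid>         # drop the stale peer entry
    systemctl start glusterd
    # then, from a node that kept its original UUID, re-probe the cloned node
    gluster peer probe cloned-node.example.com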
16:26 rastar joined #gluster
16:27 lalatenduM joined #gluster
16:29 Merlin__ joined #gluster
16:40 calavera joined #gluster
16:40 kotreshhr joined #gluster
16:40 kotreshhr left #gluster
16:41 Gill joined #gluster
16:51 EinstCrazy joined #gluster
16:53 CyrilPeponnet joined #gluster
16:56 Manikandan joined #gluster
16:59 CyrilPeponnet joined #gluster
17:04 CyrilPeponnet Hey guys, I have a lot of trouble with my gluster volumes hosting qcow2 for vms in replica 3. Basically when a node goes down, it's fine, but when it goes up again, all the vms are hanging; the qcow2s are kind of locked during the heal and it takes like 15 min to get one vm back.
17:06 CyrilPeponnet I ended up removing all the replica bricks to make it boot (healing was not going anywhere).
17:07 CyrilPeponnet AFAIK this should work (meaning: not lock the qcow2)
17:07 CyrilPeponnet Gluster 3.6.5
17:07 jccnd joined #gluster
17:11 Trefex joined #gluster
17:17 chirino joined #gluster
17:19 Merlin__ joined #gluster
17:21 Merlin__ joined #gluster
17:24 josh__ if i want to replace a brick in a distributed volume, will it move the data if i just add-brick newbrick and remove-brick oldbrick?
17:26 virusuy joined #gluster
17:27 msvbhat josh__: Yes, It should work
17:28 josh__ msvbhat: thanks.  i assumed so, but wanted to check
17:29 JoeJulian josh__: Why not just replace-brick?
17:30 ChrisNBlum joined #gluster
17:30 ivan_rossi left #gluster
17:30 josh__ i tried that, but it didn't work.  i am just testing.  it is a volume with a single brick.
17:30 msvbhat replace-brick is deprecated. The latest version just has the force option which doesn't move the data
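To spell out the two paths discussed here (a sketch; volume name, hosts and brick paths are made up): add-brick plus a staged remove-brick migrates the data, while replace-brick ... commit force just swaps the brick in place without moving anything.

    # migrate data off the old brick
    gluster volume add-brick myvol newserver:/bricks/b1
    gluster volume remove-brick myvol oldserver:/bricks/b0 start
    gluster volume remove-brick myvol oldserver:/bricks/b0 status   # wait for 'completed'
    gluster volume remove-brick myvol oldserver:/bricks/b0 commit

    # or: swap the brick without data migration
    gluster volume replace-brick myvol oldserver:/bricks/b0 newserver:/bricks/b1 commit force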
17:30 calavera joined #gluster
17:31 CyrilPeponnet @JoeJulian ? ^^ :)
17:31 * JoeJulian shakes his head...
17:32 CyrilPeponnet Is replica with 3.6.5 supposed to not lock files when healing? because hosting vms on a replicated setup doesn't work well when a node goes down / then up.
17:33 vimal joined #gluster
17:33 CyrilPeponnet I've seen some persons with the same issue on the ml
17:33 JoeJulian It's a granular lock. Has been since 3.2.
17:33 CyrilPeponnet even a file /mnt/path/to/qcow2 was taking like 15 minutes
17:33 CyrilPeponnet using gluster fuse
17:34 CyrilPeponnet well sounds not so granular to me :/
17:35 JoeJulian Check logs, cpu usage, io usage... something's obviously wrong, but I haven't had it happen so I haven't diagnosed it myself.
17:35 CyrilPeponnet after one hour of healing (due to a switch reboot that shut down the network of one of our nodes for 1 minute), I ended up removing the brick because the qcow2s were unreachable (hanging).
17:36 Rapture joined #gluster
17:36 CyrilPeponnet cpu was pretty high and disk usage too, as it's a production environment. Nothing in the logs...
17:36 JoeJulian I forget who mentioned something about lookup saying it was being set to the lowest priority. I didn't get to see that log message though.
17:42 arcolife joined #gluster
17:46 arcolife joined #gluster
17:53 jwd joined #gluster
18:05 CyrilPeponnet It's pretty easy to reproduce actually: create a replica 2 vol, start a vm doing nothing with its qcow2 hosted on a gluster mount, then reboot a node. The vm will hang and remount as RO. If you try to reboot it, libvirt will hang too, waiting for the file to be healed
18:06 JoeJulian That doesn't happen for me.
18:06 JoeJulian Also, are you using fuse, or libgfapi?
18:11 tomatto joined #gluster
18:17 kotreshhr joined #gluster
18:17 kotreshhr left #gluster
18:17 rafi joined #gluster
18:19 calavera joined #gluster
18:21 mlncn joined #gluster
18:33 Humble joined #gluster
18:35 ramky joined #gluster
18:38 pff joined #gluster
18:39 maserati|work joined #gluster
18:40 kotreshhr joined #gluster
18:40 kotreshhr left #gluster
18:46 kotreshhr joined #gluster
18:46 kotreshhr left #gluster
18:50 srsc joined #gluster
18:52 calavera joined #gluster
18:55 shaunm joined #gluster
19:00 CyrilPeponnet @JoeJulian fuse
19:01 mlncn joined #gluster
19:04 JoeJulian yep, I definitely don't ever have that problem.
19:09 srsc joined #gluster
19:12 CyrilPeponnet Well, we got two outages. First one, the switch rebooted so we lost a replica for 1 min; when it got back, all vms got stuck and remounted ro (with kernel tasks hanging on libvirt and gluster). I removed this brick, vms got back to business. Yesterday I noticed that one of the 2 remaining bricks was down (in gluster vol status). So I ran gluster start vol force. All vms hung.... I had to remove the brick (which was down).
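Not a fix, but for diagnosing this kind of hang the usual starting points are the self-heal queue and the heal algorithm; a sketch (the volume name is a placeholder, and the diff setting is just one knob commonly tried for large VM images):

    gluster volume heal myvol info                                    # what is still pending heal
    gluster volume heal myvol statistics heal-count                   # queue size per brick
    gluster volume set myvol cluster.data-self-heal-algorithm diff    # avoid full-file copies during heal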
19:16 jockek joined #gluster
19:17 kotreshhr joined #gluster
19:17 kotreshhr left #gluster
19:17 B21956 joined #gluster
19:20 jockek joined #gluster
19:29 maserati|work joined #gluster
19:30 samppah_ @debian
19:30 glusterbot samppah_: I do not know about 'debian', but I do know about these similar topics: 'files edited with vim become unreadable on other clients with debian 5.0.6'
19:31 JoeJulian @ppa
19:31 glusterbot JoeJulian: The official glusterfs packages for Ubuntu are available here: 3.5: http://goo.gl/6HBwKh 3.6: http://goo.gl/XyYImN 3.7: https://goo.gl/aAJEN5 -- See more PPAs for QEMU with GlusterFS support, and GlusterFS QA releases at https://launchpad.net/~gluster -- contact semiosis with feedback
19:31 JoeJulian maybe?
19:33 samppah_ thanks JoeJulian, there seem to be debian packages available at http://download.gluster.org/pub/gluster/glusterfs/3.7/LATEST/Debian/
19:33 glusterbot Title: Index of /pub/gluster/glusterfs/3.7/LATEST/Debian (at download.gluster.org)
19:42 Merlin__ joined #gluster
19:43 jobewan joined #gluster
19:46 srsc i'm firing off a bunch of large (10GB) reads in parallel from a set of clients to a distributed replicate volume. i'm seeing that all traffic goes to only one of the servers in each replicate pair. as i understand it the read is routed to the first server to answer the request, but it seems a little strange that one server in each pair is always faster than the other?
19:47 chirino joined #gluster
19:52 maserati|work joined #gluster
19:58 nathwill joined #gluster
20:12 shaunm joined #gluster
20:23 maserati|work joined #gluster
20:31 dblack joined #gluster
20:45 [7] left #gluster
20:45 calavera joined #gluster
20:55 bencc1 joined #gluster
21:01 papamoose2 joined #gluster
21:06 EinstCrazy joined #gluster
21:19 Merlin__ joined #gluster
21:27 chirino joined #gluster
21:34 maserati|work joined #gluster
21:40 jarrpa joined #gluster
21:46 jarrpa left #gluster
22:19 MrAbaddon joined #gluster
22:23 maserati|work joined #gluster
22:42 Merlin__ joined #gluster
22:49 Gill_ joined #gluster
22:53 maserati|work joined #gluster
22:58 ricksebak joined #gluster
23:02 ricksebak Does it make sense to store virtualized hard disks (vmdk, etc) on glusterfs? I'm looking for something where if a physical server dies, I can have the virtual hard drive already replicated to another physical computer and I can then start the VM from that other computer?
23:05 skylar joined #gluster
23:05 RedW joined #gluster
23:05 DV joined #gluster
23:06 jrm16020 joined #gluster
23:09 JoeJulian ricksebak: A lot of us do.
23:09 JoeJulian Plus, you can do live migration.
23:09 ricksebak JoeJulian: perfect. Thanks!
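For a starting point, a two-node replicated volume for VM images looks something like this sketch (hostnames, brick paths and the mount point are illustrative; many deployments prefer replica 3 to reduce split-brain risk):

    gluster volume create vmstore replica 2 node1:/bricks/vmstore node2:/bricks/vmstore
    gluster volume start vmstore
    # on each hypervisor
    mount -t glusterfs node1:/vmstore /var/lib/libvirt/images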
23:11 MrAbaddon joined #gluster
23:14 dlambrig_ joined #gluster
23:26 delhage joined #gluster
23:36 F2Knight joined #gluster
23:37 srsc is all traffic going to one server in each replicate pair the expected behavior? if the first server to answer each request gets the traffic, i would expect it to be more or less evenly distributed
23:38 srsc maybe tiny variations in hardware or network config make one server always slightly faster than the other? does anyone else see this in the wild?
23:40 gildub joined #gluster
23:42 shaunm joined #gluster
23:44 delhage joined #gluster
23:44 srsc oh, i see stuff in the mailing list about files being consistently read from the same brick: https://www.gluster.org/pipermail/gluster-users/2015-April/021497.html
23:44 glusterbot Title: [Gluster-users] Question on file reads from a distributed replicated volume (at www.gluster.org)
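If memory serves, which replica serves reads for a given file is governed by AFR's read policy and can be tuned; a hedged sketch (volume name is a placeholder, and gluster volume set help lists the exact options available on a given version):

    # hash reads over gfid+client pid instead of gfid only, so a single file's
    # readers are spread across both replicas
    gluster volume set myvol cluster.read-hash-mode 2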
23:54 delhage joined #gluster
23:56 EinstCrazy joined #gluster
