IRC log for #gluster, 2015-07-28

All times shown according to UTC.

Time Nick Message
00:11 sc0001 joined #gluster
00:13 smohan joined #gluster
00:24 mikedep333 joined #gluster
00:29 coredump joined #gluster
00:55 B21956 joined #gluster
00:58 shaunm_ joined #gluster
01:23 haomaiwa_ joined #gluster
01:29 shyam joined #gluster
01:38 nangthang joined #gluster
01:42 Lee1092 joined #gluster
01:47 ilbot3 joined #gluster
01:47 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:52 bennyturns joined #gluster
01:59 sc0001 joined #gluster
02:01 smohan joined #gluster
02:02 harish joined #gluster
02:06 aaronott joined #gluster
02:12 Slashman joined #gluster
02:14 Slashman joined #gluster
02:31 calavera joined #gluster
02:48 coredump joined #gluster
02:53 rafi joined #gluster
03:01 calavera joined #gluster
03:01 kovshenin joined #gluster
03:01 sc0001 joined #gluster
03:08 TheSeven joined #gluster
03:11 coredump joined #gluster
03:20 overclk joined #gluster
03:27 atinm joined #gluster
03:36 nbalacha joined #gluster
03:38 coredump joined #gluster
03:41 shubhendu joined #gluster
03:48 calavera joined #gluster
03:49 itisravi joined #gluster
03:56 hagarth joined #gluster
03:58 rafi joined #gluster
04:02 ppai joined #gluster
04:03 calisto joined #gluster
04:04 gem joined #gluster
04:07 kanagaraj joined #gluster
04:08 harish joined #gluster
04:20 vmallika joined #gluster
04:20 yazhini joined #gluster
04:22 RameshN joined #gluster
04:24 kotreshhr joined #gluster
04:30 rafi joined #gluster
04:41 jwd joined #gluster
04:44 jwaibel joined #gluster
04:48 ndarshan joined #gluster
04:50 ramteid joined #gluster
04:51 deepakcs joined #gluster
04:51 kovshenin joined #gluster
04:52 bharata-rao joined #gluster
04:54 pdrakeweb joined #gluster
05:01 vimal joined #gluster
05:06 pppp joined #gluster
05:10 SOLDIERz joined #gluster
05:17 kdhananjay joined #gluster
05:17 jiffin joined #gluster
05:19 smohan joined #gluster
05:20 hchiramm joined #gluster
05:25 ashiq joined #gluster
05:25 hgowtham joined #gluster
05:27 chirino joined #gluster
05:28 Manikandan joined #gluster
05:30 Bhaskarakiran joined #gluster
05:32 veonik left #gluster
05:37 veonik joined #gluster
05:37 Philambdo joined #gluster
05:40 hagarth joined #gluster
05:43 gem joined #gluster
05:54 atalur joined #gluster
06:01 jwd joined #gluster
06:04 anil joined #gluster
06:09 kayn__ joined #gluster
06:11 jwaibel joined #gluster
06:11 raghu joined #gluster
06:11 maveric_amitc_ joined #gluster
06:13 jwd_ joined #gluster
06:13 jtux joined #gluster
06:14 s19n joined #gluster
06:16 vmallika joined #gluster
06:23 jwd joined #gluster
06:24 aravindavk joined #gluster
06:24 jwd joined #gluster
06:31 kampnerj joined #gluster
06:35 dusmant joined #gluster
06:36 lchabert joined #gluster
06:36 s19n joined #gluster
06:38 vimal joined #gluster
06:40 nbalacha joined #gluster
06:41 jtux joined #gluster
06:47 hagarth joined #gluster
06:47 shaunm_ joined #gluster
06:50 ira joined #gluster
06:51 sahina joined #gluster
06:59 nbalacha joined #gluster
07:03 skoduri joined #gluster
07:04 anrao joined #gluster
07:09 meghanam joined #gluster
07:26 dgbaley joined #gluster
07:50 dusmant joined #gluster
07:52 ramky joined #gluster
08:07 nangthang joined #gluster
08:11 Philambdo joined #gluster
08:17 nbalacha joined #gluster
08:27 ctria joined #gluster
08:34 LebedevRI joined #gluster
08:39 anrao joined #gluster
08:40 itisravi_ joined #gluster
08:42 vimal joined #gluster
08:45 itisravi joined #gluster
08:49 smohan_ joined #gluster
08:51 ndarshan joined #gluster
08:51 shubhendu joined #gluster
08:51 dusmant joined #gluster
08:52 s19n joined #gluster
08:54 sahina joined #gluster
08:56 veonik left #gluster
09:01 mdavidson joined #gluster
09:04 glusterbot News from newglusterbugs: [Bug 1247529] [geo-rep]: rename followed by deletes causes ESTALE <https://bugzilla.redhat.com/show_bug.cgi?id=1247529>
09:07 deniszh joined #gluster
09:07 Philambdo joined #gluster
09:31 Manikandan joined #gluster
09:31 sahina joined #gluster
09:35 lchabert left #gluster
09:50 ndarshan joined #gluster
09:50 Leildin joined #gluster
09:51 sahina joined #gluster
09:51 Manikandan joined #gluster
09:52 shubhendu joined #gluster
09:59 atinm joined #gluster
10:02 necrogami joined #gluster
10:13 anrao joined #gluster
10:16 ndarshan joined #gluster
10:16 B21956 joined #gluster
10:26 vmallika joined #gluster
10:33 atinm joined #gluster
10:40 ira joined #gluster
10:43 necrogami joined #gluster
10:46 nsoffer joined #gluster
10:47 cleong joined #gluster
10:56 _Bryan_ joined #gluster
10:57 shubhendu joined #gluster
10:57 dusmant joined #gluster
10:58 ndarshan joined #gluster
11:03 kovshenin joined #gluster
11:06 RameshN joined #gluster
11:18 sage_ joined #gluster
11:28 TvL2386 joined #gluster
11:31 shubhendu joined #gluster
11:31 dusmant joined #gluster
11:34 Bhaskarakiran joined #gluster
11:47 kovsheni_ joined #gluster
11:52 overclk joined #gluster
11:56 kotreshhr left #gluster
12:00 rafi REMINDER: Gluster Community Bug Triage meeting starting in #gluster-meeting
12:15 jtux joined #gluster
12:19 gletessier joined #gluster
12:23 jrm16020 joined #gluster
12:35 glusterbot News from newglusterbugs: [Bug 1245380] [RFE] Render all mounts of a volume defunct upon access revocation <https://bugzilla.redhat.com/show_bug.cgi?id=1245380>
12:35 glusterbot News from newglusterbugs: [Bug 1245966] log files removexattr() can't find a specified key or value <https://bugzilla.redhat.com/show_bug.cgi?id=1245966>
12:35 glusterbot News from newglusterbugs: [Bug 1246024] gluster commands space in brick path fails <https://bugzilla.redhat.com/show_bug.cgi?id=1246024>
12:35 glusterbot News from newglusterbugs: [Bug 1245331] volume start command is failing when glusterfs compiled with debug enabled <https://bugzilla.redhat.com/show_bug.cgi?id=1245331>
12:45 shubhendu joined #gluster
12:48 ctria joined #gluster
12:49 coredump joined #gluster
12:51 dusmant joined #gluster
12:53 skoduri joined #gluster
12:55 julim joined #gluster
12:55 nsoffer joined #gluster
12:59 hagarth joined #gluster
12:59 mpietersen joined #gluster
13:00 chirino joined #gluster
13:00 mpietersen joined #gluster
13:03 ppp joined #gluster
13:13 ctria joined #gluster
13:17 dusmant joined #gluster
13:20 DV joined #gluster
13:27 dgandhi joined #gluster
13:37 nsoffer joined #gluster
13:44 theron joined #gluster
13:45 theron joined #gluster
13:48 vmallika joined #gluster
13:53 overclk joined #gluster
13:56 kotreshhr joined #gluster
13:59 arcolife joined #gluster
14:01 R0ok_ joined #gluster
14:03 gletessier joined #gluster
14:05 theron_ joined #gluster
14:07 aaronott joined #gluster
14:09 shyam joined #gluster
14:14 neofob joined #gluster
14:15 chirino joined #gluster
14:16 vimal joined #gluster
14:18 dijuremo joined #gluster
14:19 bennyturns joined #gluster
14:19 rwheeler joined #gluster
14:20 _Bryan_ joined #gluster
14:22 necrogami joined #gluster
14:26 Manikandan joined #gluster
14:28 beeradb joined #gluster
14:30 kanagaraj joined #gluster
14:30 togdon joined #gluster
14:32 _maserati joined #gluster
14:33 theron joined #gluster
14:34 dusmant joined #gluster
14:35 glusterbot News from newglusterbugs: [Bug 1244721] glusterd: Porting left out log messages to new logging API <https://bugzilla.redhat.com/show_bug.cgi?id=1244721>
14:37 hgowtham joined #gluster
14:40 bennyturns joined #gluster
14:40 jobewan joined #gluster
14:43 dusmant joined #gluster
14:46 jobewan joined #gluster
14:47 ubertux joined #gluster
14:48 ubertux Hi guys !! I have a distributed-replicated 4x2 gluster farm. I am trying to create a 512MB file but I am getting "no space left on device" despite there being 1.2GB free in the gluster mountpoint. I have also checked free inodes and that is not an issue
14:48 harish joined #gluster
14:49 l0uis ubertux: how much space do you have on the bricks themselves
14:49 necrogami joined #gluster
14:49 ubertux Some of the bricks are showing free space, others are full. I have also tried amending cluster.min-free-disk to a low MB value
14:50 l0uis well i suspect gluster is selecting a node where the file won't fit, and thus you can't write it.
14:51 ubertux OK, is there a way of selecting a node that does have free space ?
14:51 l0uis I don't know, sorry. I think your solution is going to be to add more space and rebalance.
14:51 ubertux I have 1.2GB free and I have tried a rebalance and fix-layout already
14:52 ubertux It won't even let me create a 256MB file; 160MB seems the upper limit.
14:52 l0uis are the bricks the same size?
14:52 ubertux Yes, all the bricks are 1GB
14:53 l0uis well, like I said, I don't know if you can influence the DHT, but I suspect you're just hitting practical limits with respect to file size vs available space etc.
14:54 l0uis 1 GB bricks?
14:54 ubertux yes, it's a test gluster
14:54 l0uis i see. well. you're trying to store a file that = 25% of the total volume space.
14:54 l0uis you can see how lumpy allocation is going to cause trouble doing that.
14:55 l0uis presumably you've stored other very large files into this test volume as well
14:55 l0uis if that is your use case i think striping is the answer, but again, IANAGE :)
14:55 ubertux ah ok, makes sense. Is there a way to convert to striping or do i need to recreate the whole thing from scratch?
14:56 l0uis i highly suspect you'd have to re-create the volume, but i dont know for sure
14:56 ubertux Ok, thanks for all your suggestions and explanation
14:57 l0uis no problem, hope it helped.
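
For readers hitting the same ENOSPC symptom on a distribute-replicate volume, a minimal sketch of the checks and the cluster.min-free-disk tweak discussed above (volume and brick names are hypothetical):

    # Compare per-brick usage -- DHT places a whole file on one replica pair,
    # so the fullest brick, not the volume total, decides whether a write fits.
    gluster volume status testvol detail | grep -E 'Brick|Free'
    df -h /data/brick1            # run on each brick host

    # Lower the headroom DHT reserves before redirecting new files elsewhere
    # (this only influences placement of new files; it never splits a large one).
    gluster volume set testvol cluster.min-free-disk 100MB

    # After adding capacity, spread existing data across the new layout.
    gluster volume rebalance testvol start
    gluster volume rebalance testvol status
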
15:07 jcastill1 joined #gluster
15:08 dgbaley joined #gluster
15:12 jcastillo joined #gluster
15:16 theron_ joined #gluster
15:17 B21956 joined #gluster
15:32 jwd joined #gluster
15:40 jwd joined #gluster
15:42 sc0001 joined #gluster
15:45 cholcombe joined #gluster
15:50 R0ok_ joined #gluster
15:50 and` hi, I detected a split brain on a folder I could re-generate and I thought it was going to be easier to just delete it from both .glusterfs and the relevant volume storage and re-create it again. The folder is however still kept on the split-brain list of files, is there a way to let glusterfs recheck all the files / folders it previously had on the split-brain list?
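
A minimal sketch of how that split-brain state is usually re-inspected after a manual cleanup (volume name hypothetical; whether stale entries drop off depends on the heal indices left on the bricks):

    # List what the bricks still consider split-brain / pending heal.
    gluster volume heal myvol info split-brain
    gluster volume heal myvol info

    # Ask the self-heal daemon to re-crawl everything; entries for paths that
    # were deleted and recreated may be re-evaluated on this pass.
    gluster volume heal myvol full
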
15:52 Manikandan joined #gluster
15:52 veonik_ joined #gluster
15:53 sankarshan_ joined #gluster
16:03 kanagaraj joined #gluster
16:05 jonb1 joined #gluster
16:08 s19n left #gluster
16:17 skoduri joined #gluster
16:23 calavera joined #gluster
16:29 and` is there a way to turn shd off on the glusterfs 3.5 series?
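
A minimal sketch of the volume option usually used for that, assuming the 3.5 series accepts it (volume name hypothetical):

    # Stop the self-heal daemon from crawling this volume.
    gluster volume set myvol cluster.self-heal-daemon off

    # Re-enable later with:
    gluster volume set myvol cluster.self-heal-daemon on
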
16:31 ron-slc joined #gluster
16:33 meghanam joined #gluster
16:38 theron joined #gluster
16:38 firemanxbr joined #gluster
16:51 shyam joined #gluster
16:56 calisto joined #gluster
17:04 overclk joined #gluster
17:07 Rapture joined #gluster
17:08 Slashman joined #gluster
17:17 jwd joined #gluster
17:18 sc0001 joined #gluster
17:21 jonb1 joined #gluster
17:25 uebera|| joined #gluster
17:35 nsoffer joined #gluster
17:46 captainflannel joined #gluster
17:50 timotheus1 joined #gluster
17:50 timotheus1 If i replace a brick and do a heal full will it lock any files?
17:51 timotheus1 because based on the amount of data it has to move, it'll take two days
17:52 jcastill1 joined #gluster
17:52 haomaiw__ joined #gluster
17:55 aaronott joined #gluster
17:57 jcastillo joined #gluster
18:07 B21956 joined #gluster
18:08 captainflannel if I add several bricks to increase space on a volume and then run a rebalance, will this trigger files getting locked?
18:10 timotheus1 captainflannel, we've done rebalancing and it was fine
18:11 captainflannel if a file is locked by another client will it just skip that item and continue with the next during rebalance?
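
A minimal sketch of the expand-and-rebalance flow being asked about (brick paths and volume name hypothetical; JoeJulian addresses the locking question further down):

    # Add bricks in multiples of the replica count (2 here).
    gluster volume add-brick myvol node3:/data/brick1/myvol node4:/data/brick1/myvol

    # Recompute the layout and migrate existing files onto the new bricks.
    gluster volume rebalance myvol start
    gluster volume rebalance myvol status
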
18:15 sc0001_ joined #gluster
18:15 togdon joined #gluster
18:23 plarsen joined #gluster
18:28 sc0001 joined #gluster
18:30 haomaiwa_ joined #gluster
18:43 neofob left #gluster
18:54 ron-slc joined #gluster
18:58 dijuremo joined #gluster
19:00 coredump joined #gluster
19:05 rotbeard joined #gluster
19:07 nzero joined #gluster
19:11 nzero looking at using gluster for a compute cluster running mesos. my boss is concerned about network performance if a job is run on node1 but the file is on node2 and wanted to know if there was a way to identify which server the data was stored on so the job could be scheduled there, to reduce network traffic. anyone have experience with this kind of processing/storage concern and if it makes sense to try and locate the file (and
19:11 nzero how to do it?)
19:11 Rapture joined #gluster
19:12 chirino joined #gluster
19:16 deniszh joined #gluster
19:24 calavera joined #gluster
19:26 captainflannel nzero, i dont think gluster has any way to report location of the file and its replicas
19:37 togdon joined #gluster
19:41 JoeJulian @which brick
19:41 glusterbot JoeJulian: To determine on which brick(s) a file resides, run getfattr -n trusted.glusterfs.pathinfo $file through the client mount.
19:41 JoeJulian nzero: ^
19:41 nzero ah, ok. thanks
19:41 JoeJulian captainflannel: You might be interested in that factoid as well.
19:44 JoeJulian timotheus1: It locks the chunk of the file it's working on, not the whole file. The locked chunk is sufficiently small to prevent noticeable interference.
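
A minimal sketch of the replace-and-heal flow timotheus1 is describing (host and path names hypothetical; syntax per the 3.x CLI as I recall it):

    # Swap the dead brick for a new one and start repopulating it.
    gluster volume replace-brick myvol oldnode:/data/brick1/myvol newnode:/data/brick1/myvol commit force

    # Trigger a full crawl so the new brick gets every file, then watch progress.
    gluster volume heal myvol full
    gluster volume heal myvol info
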
19:44 Pupeno joined #gluster
19:44 Pupeno joined #gluster
19:45 timotheus1 JoeJulian, we had bricks that were offline, we brought them back online, and then the self heal would cause a lot of locking and timeouts on reads
19:45 timotheus1 we disabled auto self healing and plan on doing a manual heal this weekend
19:46 JoeJulian captainflannel: It *should* (but I haven't read through the source to verify this) move the file to the new brick without locking it, and if there's a client lock - migrate that lock to the new brick.
19:46 JoeJulian timotheus1: That would probably be the client-side self-heals that exceed the background-self-heal limit.
19:47 JoeJulian Once the background queue is full, any additional heals the client needs are for some reason done in the foreground.
19:47 JoeJulian If it were up to me, the client-initiated self-heals that exceed the background queue would be marked for the self-heal daemon to take care of.
19:48 * JoeJulian needs to file a bug
19:48 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
19:48 timotheus1 JoeJulian, what is the default value of background self heal
19:48 JoeJulian 16
19:48 timotheus1 ok that makes sense now
19:49 timotheus1 we have like millions of small files and that did not have a good time ;)
19:51 _Bryan_ joined #gluster
19:56 deniszh joined #gluster
19:57 JoeJulian timotheus1: if it were me, I might disable self-heal checks at the client, and leave it up to the self-heal daemon.
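
A minimal sketch of that split -- client-side heals off, daemon left on -- plus the queue-depth option behind the "16" above (volume name hypothetical; option availability varies by release):

    # Keep the self-heal daemon doing the repair work in the background...
    gluster volume set myvol cluster.self-heal-daemon on

    # ...but stop clients from healing inline on lookup, which is what caused
    # the foreground stalls once the 16-slot background queue filled up.
    gluster volume set myvol cluster.data-self-heal off
    gluster volume set myvol cluster.metadata-self-heal off
    gluster volume set myvol cluster.entry-self-heal off

    # The background queue depth itself is tunable on recent releases.
    gluster volume set myvol cluster.background-self-heal-count 16
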
19:58 ghenry joined #gluster
19:58 ghenry joined #gluster
20:01 l0uis JoeJulian: Why might getfattr show nothing? I'm just curiously playing around w/ it on my volume, but I'm not getting any attributes back.
20:02 Pupeno joined #gluster
20:02 Pupeno joined #gluster
20:04 deniszh joined #gluster
20:05 timotheus1 joined #gluster
20:06 JoeJulian l0uis: not root?
20:06 l0uis JoeJulian: tried w/ and w/o sudo
20:07 glusterbot News from newglusterbugs: [Bug 1247763] Foreground self-heals should be removed <https://bugzilla.redhat.com/show_bug.cgi?id=1247763>
20:07 l0uis hm
20:07 glusterbot News from newglusterbugs: [Bug 1247765] Glusterfsd crashes because of thread-unsafe code in gf_authenticate <https://bugzilla.redhat.com/show_bug.cgi?id=1247765>
20:07 JoeJulian Show me your command
20:07 nsoffer joined #gluster
20:08 deniszh joined #gluster
20:09 timotheus1 yo JoeJulian one last question. if these are the options:
20:09 timotheus1 cluster.self-heal-daemon: off
20:09 timotheus1 cluster.metadata-self-heal: off
20:09 timotheus1 cluster.entry-self-heal: off
20:09 timotheus1 cluster.data-self-heal: off
20:09 timotheus1 will a manual heal still run ?
20:10 JoeJulian timotheus1: no
20:10 JoeJulian no heals will run at all.
20:10 JoeJulian The first rule turns off the self-heal daemon.
20:11 JoeJulian The next three turn off all heals at the client.
20:11 l0uis JoeJulian: https://paste.fedoraproject.org/249096/43811425/
20:12 timotheus1 JoeJulian, so the self-heal daemon is required for manual heals?
20:12 JoeJulian l0uis: Oh, that's the client. It'll only show xattrs set through the client, like user xattrs or security and the like. The brick xattrs are filtered out by the translators.
20:13 Pupeno joined #gluster
20:13 l0uis Oh. So when you said "run getfattr through the client mount" what did you mean?
20:14 JoeJulian timotheus1: Not sure what you mean by "manual" but if you want your replicas to be replicated after a server has down time, you need either the self-heal daemon or the client self-heals to be available.
20:15 JoeJulian To me a "manual" self-heal is done through the client with a "stat $file" to queue the background self-heal.
20:15 JoeJulian l0uis: getfattr -n trusted.glusterfs.pathinfo $file
20:15 JoeJulian "-m ." won't work as the attribute is not returned via regex.
20:15 l0uis JoeJulian: Oh doh I see what I did.
20:16 shaunm_ joined #gluster
20:16 l0uis JoeJulian: i tried the -n cmd first, got nothing. Then tried to dump everything, got nothing. Then tried to dump everything w/ sudo, and still nothing. But I see if I use -n and sudo it works.
20:16 l0uis JoeJulian: thanks
20:16 JoeJulian awesome. You're welcome.
20:17 JoeJulian @change "which brick" 1 s/mount./mount as root./
20:17 glusterbot JoeJulian: Error: The command "change" is available in the Factoids, Herald, and Topic plugins.  Please specify the plugin whose command you wish to call by using its name as a command before "change".
20:17 JoeJulian @factoids change "which brick" 1 s/mount./mount as root./
20:17 glusterbot JoeJulian: The operation succeeded.
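
Pulling the thread together, a worked sketch of the pathinfo query nzero and l0uis were after (mount point, volume name and output are hypothetical; note -n and root, since the "-m ." listing filters out this virtual xattr):

    # Run against a file on the FUSE mount, as root.
    sudo getfattr -n trusted.glusterfs.pathinfo /mnt/gluster/somefile

    # Roughly what the reply looks like on a replica-2 volume; the hostnames in
    # the POSIX entries tell you which servers hold the data:
    # trusted.glusterfs.pathinfo="(<DISTRIBUTE:myvol-dht>
    #   (<REPLICATE:myvol-replicate-0>
    #     <POSIX(/data/brick1/myvol):node1:/data/brick1/myvol/somefile>
    #     <POSIX(/data/brick1/myvol):node2:/data/brick1/myvol/somefile>))"
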
20:22 _maserati My production gluster is currently sitting on 3.6.1, should I upgrade to the latest 3.6.x ? Any reason for moving to 3.7 yet?
20:22 julim joined #gluster
20:32 timotheus1 JoeJulian, i mean manual by running the command: gluster volume heal VOLNAME
20:36 shyam joined #gluster
20:41 JoeJulian _maserati: yes to 3.6 minor upgrade. As for 3.7, I usually avoid feature jumps unless the feature set meets a need or if bug fixes are no longer being backported.
20:41 JoeJulian You're safe on both counts.
20:41 deniszh joined #gluster
20:41 JoeJulian timotheus1: Then you would absolutely need the self-heal daemon for that.
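
So, for the weekend plan above, a minimal sketch of the order of operations (volume name hypothetical):

    # 1. Bring the self-heal daemon back before asking for a heal.
    gluster volume set myvol cluster.self-heal-daemon on

    # 2. Kick off the heal (add "full" to force a complete crawl).
    gluster volume heal myvol
    gluster volume heal myvol full

    # 3. Watch it drain.
    gluster volume heal myvol info
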
20:44 _maserati JoeJulian, thank you sir
20:45 cc1 joined #gluster
20:48 cc1 Hey all. Trying to set up a fresh install of the OS (CentOS 6.6) and GlusterFS (3.7.2). Getting "Error: Request timed out" any time I try to do anything with a volume (create, stop, start). But they do eventually end up performing the operation. Any thoughts on the cause?
20:48 deniszh joined #gluster
21:00 JoeJulian cc1: check the logs in /var/log/glusterfs
21:00 JoeJulian There's a cli log that might give a clue.
21:05 cc1 not much info that I can figure out:
21:05 cc1 [cli.c:716:main] 0-cli: Started running gluster with version 3.7.2
21:05 cc1 [cli-cmd-volume.c:1874:cli_check_gsync_present] 0-: geo-replication not installed
21:05 cc1 [event-epoll.c:629:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
21:06 cc1 [socket.c:2409:socket_event_handler] 0-transport: disconnecting now
21:06 cc1 then it goes until it hits the timeout, and exits with -1
21:11 timotheus1 joined #gluster
21:14 JoeJulian cc1: selinux?
21:14 cc1 disabled
21:14 cc1 iptables off. same subnet, so no firewalls in the way.
21:15 JoeJulian It would be localhost anyway
21:15 csim strace ?
21:16 JoeJulian That's what I would do, too, but I hate asking other people to learn to read that.
21:27 JoeJulian cc1: see if any other logs increment their timestamps when you get that issue and see if there's any clues in those.
21:29 cc1 so far nothing. going through strace output to see if there is anything of value in there
21:35 chirino joined #gluster
21:36 Pupeno joined #gluster
21:43 togdon joined #gluster
21:46 badone joined #gluster
21:51 yosafbridge joined #gluster
21:55 ron-slc joined #gluster
22:12 cc2 joined #gluster
22:17 haomaiwa_ joined #gluster
22:29 cc2 JoeJulian: found something in strace, comparing it to another create in lab:
22:29 cc2 connect(7, {sa_family=AF_FILE, path="/var/run/gluster/quotad.socket"}, 110) = -1 ENOENT (No such file or directory)
22:29 JoeJulian That seems like a bug. Do you have quota enabled on another volume?
22:30 cc2 nope. deleting volumes as i go
22:38 JoeJulian Ok, so you created a volume with quota, deleted it, then tried to create a new volume.
22:38 JoeJulian I should be able to duplicate that.
22:39 cc2 never created a quota on it
22:39 JoeJulian ... you're just trying to make things difficult for me, aren't you? ;)
22:39 cc2 why yes yes i am
22:40 cc2 gluster volume create  test rep 2 transport tcp node1:/data/brick1/test node2:/data/brick1/test
22:41 JoeJulian Should this install be any different from your lab that isn't having this problem?
22:41 cc2 nope. same procedure. same OS. kernel may be different because i hadn't updated that box yet.
22:43 JoeJulian rpm -qa 'gluster*' | grep rhs
22:43 JoeJulian Make sure that's empty.
22:44 cc2 confirmed no results
22:50 natarej joined #gluster
22:52 nzero joined #gluster
22:53 JoeJulian cc2: Well, I see in the code why that's happening. It should fail instantly and not matter.
22:54 JoeJulian What if you don't specify "transport tcp" in your volume create?
22:58 cc2 still occurs
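
For anyone chasing the same symptom, a minimal sketch of the checks used in this exchange (log paths per a stock CentOS package layout; the quotad.socket ENOENT is expected with quota off and, per the above, should be harmless):

    # Watch the CLI and management logs while reproducing the timeout.
    tail -f /var/log/glusterfs/cli.log /var/log/glusterfs/etc-glusterfs-glusterd.vol.log

    # Confirm which sockets glusterd actually publishes.
    ls -l /var/run/gluster/

    # Trace the CLI itself to see where it blocks.
    strace -f -o /tmp/gluster-cli.trace gluster volume create test replica 2 \
        node1:/data/brick1/test node2:/data/brick1/test
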
23:01 cc2 JoeJulian: heading out. will ping you tomorrow
23:02 JoeJulian ok
23:15 cleong joined #gluster
23:16 beeradb_ joined #gluster
23:25 Pupeno_ joined #gluster
23:46 plarsen joined #gluster
23:52 nzero joined #gluster
