
IRC log for #gluster, 2015-05-15


All times are shown in UTC.

Time Nick Message
00:34 halfinhalfout joined #gluster
00:34 halfinhalfout1 joined #gluster
01:08 harish__ joined #gluster
01:16 lyang0 joined #gluster
01:27 nangthang joined #gluster
02:00 diegows joined #gluster
02:05 joseki joined #gluster
02:06 joseki does changing the performance. options require any sort of restart of the volume or daemons?
02:17 Anjana joined #gluster
02:27 akay does anyone know why deleted files are recreated by gluster?
02:37 milkyline joined #gluster
02:42 bharata-rao joined #gluster
02:56 sakshi joined #gluster
02:57 milkyline joined #gluster
03:01 milkyline joined #gluster
03:07 milkyline left #gluster
03:13 Anjana1 joined #gluster
03:16 Pupeno joined #gluster
03:18 milkyline joined #gluster
03:45 TheSeven joined #gluster
03:52 kanagaraj joined #gluster
03:56 kumar joined #gluster
04:04 RameshN joined #gluster
04:04 kdhananjay joined #gluster
04:09 shubhendu joined #gluster
04:10 rafi joined #gluster
04:10 milkyline joined #gluster
04:13 Anjana joined #gluster
04:14 nishanth joined #gluster
04:25 DV joined #gluster
04:27 dusmant joined #gluster
04:31 gildub joined #gluster
04:37 hgowtham joined #gluster
04:40 ashiq joined #gluster
04:41 ndarshan joined #gluster
04:46 Manikandan joined #gluster
04:46 Manikandan_ joined #gluster
04:56 DV joined #gluster
04:57 pppp joined #gluster
04:58 deepakcs joined #gluster
05:04 Apeksha joined #gluster
05:04 Gill joined #gluster
05:07 nbalacha joined #gluster
05:07 jiffin joined #gluster
05:09 anil_ joined #gluster
05:17 gem joined #gluster
05:18 ppai joined #gluster
05:18 nishanth joined #gluster
05:21 DV joined #gluster
05:23 Bhaskarakiran joined #gluster
05:25 gem joined #gluster
05:38 sripathi joined #gluster
05:45 gem joined #gluster
05:46 dusmant joined #gluster
05:52 glusterbot News from newglusterbugs: [Bug 1221866] DHT Layout selfheal code should log errors <https://bugzilla.redhat.com/show_bug.cgi?id=1221866>
05:52 glusterbot News from newglusterbugs: [Bug 1221869] Even after reseting the bitrot and scrub demons are running <https://bugzilla.redhat.com/show_bug.cgi?id=1221869>
05:54 karnan joined #gluster
05:58 maveric_amitc_ joined #gluster
06:01 nishanth joined #gluster
06:02 gem joined #gluster
06:04 gem_ joined #gluster
06:06 gem joined #gluster
06:08 DV_ joined #gluster
06:13 nbalacha joined #gluster
06:20 jtux joined #gluster
06:22 glusterbot News from newglusterbugs: [Bug 1217589] glusterd crashed while schdeuler was creating snapshots when bit rot was enabled on the volumes <https://bugzilla.redhat.com/show_bug.cgi?id=1217589>
06:43 rjoseph joined #gluster
06:45 elico joined #gluster
06:47 bharata_ joined #gluster
06:51 atalur joined #gluster
06:53 spalai joined #gluster
07:01 sripathi joined #gluster
07:02 harish__ joined #gluster
07:14 spalai joined #gluster
07:17 kdhananjay joined #gluster
07:21 nsoffer joined #gluster
07:23 Philambdo joined #gluster
07:24 Anjana1 joined #gluster
07:27 harish__ joined #gluster
07:34 Slashman joined #gluster
07:40 [Enrico] joined #gluster
07:54 Norky joined #gluster
07:55 hgowtham joined #gluster
07:57 deniszh joined #gluster
07:59 poornimag joined #gluster
08:14 _shaps_ joined #gluster
08:15 nbalacha joined #gluster
08:15 ctria joined #gluster
08:16 bjornar joined #gluster
08:17 hchiramm joined #gluster
08:18 PaulCuzner joined #gluster
08:22 al joined #gluster
08:24 hchiramm joined #gluster
08:25 kovshenin joined #gluster
08:27 T0aD joined #gluster
08:37 spalai left #gluster
08:37 spalai joined #gluster
08:38 deniszh1 joined #gluster
08:38 deniszh1 joined #gluster
08:42 nishanth joined #gluster
08:45 elico joined #gluster
08:50 PaulCuzner joined #gluster
08:53 glusterbot News from newglusterbugs: [Bug 1218976] cluster.nufa :- User should be able to set this option only for  DHT volume (distributed or distributed-replicated or any other combination having distributed) <https://bugzilla.redhat.com/show_bug.cgi?id=1218976>
08:58 soumya joined #gluster
09:00 yossarianuk hi I have followed this guide to setup geo-replication  https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.1/html/Administration_Guide/chap-User_Guide-Geo_Rep-Preparation-Settingup_Environment.html
09:00 yossarianuk and when I use 'gluster volume geo-replication  status'
09:00 yossarianuk I see Status : Active
09:00 yossarianuk but no files are replicating
09:02 yossarianuk sorry - no files are replicating ...
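For reference, a geo-replication session is normally created, started, and inspected roughly as below; the volume and host names here are placeholders, not the setup discussed in this log:

    gluster volume geo-replication master-vol slavehost::slave-vol create push-pem
    gluster volume geo-replication master-vol slavehost::slave-vol start
    gluster volume geo-replication master-vol slavehost::slave-vol status detail

Depending on the version, "status detail" also reports the crawl status and counters such as files synced/pending, which is usually more telling than the plain Active/Faulty column.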
09:03 jiffin1 joined #gluster
09:07 alexcrow joined #gluster
09:11 yosafbridge joined #gluster
09:16 akay joined #gluster
09:23 glusterbot News from newglusterbugs: [Bug 1221914] Implement MKNOD fop in bit-rot. <https://bugzilla.redhat.com/show_bug.cgi?id=1221914>
09:23 ninkotech joined #gluster
09:24 ninkotech_ joined #gluster
09:28 Anjana joined #gluster
09:31 jiffin joined #gluster
09:33 nishanth joined #gluster
09:35 vovcia yossarianuk: hi :)
09:35 vovcia yossarianuk: ill be testing geo-rep in minutes :)
09:40 Anjana1 joined #gluster
09:41 hagarth joined #gluster
09:43 harish__ joined #gluster
09:51 Intensity joined #gluster
09:52 elico joined #gluster
09:55 ghenry joined #gluster
10:03 LebedevRI joined #gluster
10:21 Anjana joined #gluster
10:23 glusterbot News from newglusterbugs: [Bug 1221938] SIGNING FAILURE  Error messages  are poping up in the bitd log <https://bugzilla.redhat.com/show_bug.cgi?id=1221938>
10:23 glusterbot News from newglusterbugs: [Bug 1221941] glusterfsd: bricks crash while executing ls on nfs-ganesha vers=3 <https://bugzilla.redhat.com/show_bug.cgi?id=1221941>
10:28 ivan_rossi joined #gluster
10:35 bene2 joined #gluster
10:38 spalai joined #gluster
10:51 atalur joined #gluster
10:51 Anjana1 joined #gluster
11:00 elico left #gluster
11:00 Anjana joined #gluster
11:08 Pupeno joined #gluster
11:12 atalur joined #gluster
11:13 gildub joined #gluster
11:13 ira joined #gluster
11:14 ira joined #gluster
11:24 glusterbot News from newglusterbugs: [Bug 1221964] After adding brick not able to see the content of the mount and getting "cannot open directory .: Structure needs cleaning" <https://bugzilla.redhat.com/show_bug.cgi?id=1221964>
11:24 glusterbot News from newglusterbugs: [Bug 1221967] Do not allow detach-tier commands on a non tiered volume <https://bugzilla.redhat.com/show_bug.cgi?id=1221967>
11:24 glusterbot News from newglusterbugs: [Bug 1221969] tiering: use sperate log/socket/pid file for tiering <https://bugzilla.redhat.com/show_bug.cgi?id=1221969>
11:35 yossarianuk vovcia: cool - good luck !
11:35 yossarianuk vovcia: I ended up deleting all volumes / var/log/gluster, etc - redoing it 'seems' to be working..
11:37 yossarianuk How can I rate limit a glusterfs geo-replicated volume ?
11:37 yossarianuk (i.e bandwidth / speed limit)
11:41 kkeithley1 joined #gluster
11:54 glusterbot News from newglusterbugs: [Bug 1221980] bitd log grows rapidly if brick goes down <https://bugzilla.redhat.com/show_bug.cgi?id=1221980>
12:01 chirino joined #gluster
12:02 rafi1 joined #gluster
12:03 pdrakeweb joined #gluster
12:11 Forcepoint joined #gluster
12:13 aaronott joined #gluster
12:16 sripathi left #gluster
12:18 plarsen joined #gluster
12:19 Forcepoint Hi guys, I'm having a problem. I have a 2 node replicated setup. Today a brick failed and I wanted to replace it. So what I did was bring down the failed server, replace the broken hard drive and restart the server. Self-replication doesn't seem to work now. In the brick log I get these kinds of errors now:
12:19 Forcepoint [2015-05-15 12:19:32.945206] I [server-rpc-fops.c:475:server_mkdir_cbk] 0-data-server: 36530: MKDIR /web (00000000-0000-0000-0000-000000000001/web) ==> (Permission denied)
12:20 Forcepoint This only happens for folders that are not owned by root. Folders owned by root heal fine
12:20 Forcepoint Can anyone help?
12:31 bfoster joined #gluster
12:32 rafi joined #gluster
12:35 rafi1 joined #gluster
12:47 nsoffer joined #gluster
12:47 sysadmin-di2e1 left #gluster
12:51 Forcepoint joined #gluster
12:55 rafi joined #gluster
12:56 glusterbot News from resolvedglusterbugs: [Bug 905747] [FEAT] Tier support for Volumes <https://bugzilla.redhat.com/show_bug.cgi?id=905747>
13:14 wushudoin joined #gluster
13:18 bturner joined #gluster
13:20 liquidat joined #gluster
13:25 dgandhi joined #gluster
13:27 glusterbot News from resolvedglusterbugs: [Bug 1218638] tiering documentation <https://bugzilla.redhat.com/show_bug.cgi?id=1218638>
13:29 firemanxbr joined #gluster
13:30 hamiller joined #gluster
13:32 georgeh-LT2 joined #gluster
13:35 jmarley joined #gluster
13:38 nbalacha joined #gluster
13:42 halfinhalfout joined #gluster
13:43 poornimag joined #gluster
13:45 yossarianuk hi - I have a geo-rep setup - to save time I copied most of the files into the slave (and used that partition as a brick) - if I look at
13:45 yossarianuk gluster volume geo-replication master-vol repository3::slave-vol status
13:45 yossarianuk It shows active - however no files are being replicated
13:45 yossarianuk what can I do to troubleshoot this ?
13:50 nsoffer joined #gluster
13:50 ndevos yossarianuk: did you copy the files with their extended attributes to the bricks? in that case you should be able to ,,(self heal) the slave volume
13:50 glusterbot yossarianuk: I do not know about 'self heal', but I do know about these similar topics: 'targeted self heal'
13:51 ndevos ~targeted self heal | yossarianuk
13:51 glusterbot yossarianuk: https://web.archive.org/web/20130314122636/http://community.gluster.org/a/howto-targeted-self-heal-repairing-less-than-the-whole-volume/
13:51 srepetsk joined #gluster
13:52 jeek joined #gluster
13:52 ndevos yossarianuk: I'm not a geo-rep expert, but I think it should be possible that way - without xattrs, it would be very difficult
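The targeted self-heal approach glusterbot links to boils down to forcing lookups through a client mount: stat'ing a path from a FUSE mount of the volume triggers self-heal for that path. A minimal sketch, assuming the volume is FUSE-mounted at the placeholder path /mnt/slave-vol:

    find /mnt/slave-vol/some/dir -noleaf -print0 | xargs -0 stat > /dev/null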
13:58 David_H__ joined #gluster
13:58 glusterbot News from resolvedglusterbugs: [Bug 1188184] NFS-Ganesha new features support for 3.7 <https://bugzilla.redhat.com/show_bug.cgi?id=1188184>
14:09 archers joined #gluster
14:12 Gill joined #gluster
14:14 archers left #gluster
14:15 srepetsk hey all, i'm having this duplicate UUID issue in my cluster (http://www.gluster.org/pipermail/gluster-users.old/2015-February/020668.html); any recommendations for the best way to remove the dup?
14:20 srepetsk remove brick, remove duplicate, re-add brick is what comes to mind
14:24 glusterbot News from newglusterbugs: [Bug 1222023] dht: buffer overrun in dht-rename <https://bugzilla.redhat.com/show_bug.cgi?id=1222023>
14:30 vimal joined #gluster
14:48 coredump joined #gluster
14:52 yossarianuk i just cannot get it to work if I add files to the slave (to save bandwidth)
14:52 yossarianuk it was geo-replication is 'active' but no files are sent
14:53 yossarianuk it *says* geo-replication is 'active' but no files are sent
14:53 yossarianuk i've tried to re-setup the brick + geo rep -> same
14:56 yossarianuk I looked at this -> https://web.archive.org/web/20130314122636/http://community.gluster.org/a/howto-targeted-self-heal-repairing-less-than-the-whole-volume/
15:01 kovshenin joined #gluster
15:03 yossarianuk can anyone offer any advice on where to look - i'm on the verge of deleting everything and starting again (for about the 5th time..)
15:04 yossarianuk why would the status of geo-rep be ''active'' but no files are replicated ?
15:09 milkyline joined #gluster
15:14 rafi joined #gluster
15:24 rwheeler_ joined #gluster
15:28 dusmant joined #gluster
15:58 jfdoucet I have a glusterfs in distributed replicate with 12 nodes and 1 of the nodes shows 2751 gfid files when running "gluster volume heal" and if I check one I see that it is not a hard link. I read over the web that they are orphaned. My question is, is it safe to just remove them (rm [FILE]) or is there a better way of dealing with them ?
16:13 JoeJulian yossarianuk: "i just cannot get it to work if I add files to the slave" geo-replication is unidirectional. From master to slave.
16:14 cholcombe joined #gluster
16:17 lkoranda joined #gluster
16:19 JoeJulian jfdoucet: If the link count is 1 then it's safe to remove them, i.e. find .glusterfs/[0-9a-f][0-9a-f] -type f -links 1 -print0 | xargs -0 /bin/rm
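JoeJulian's one-liner is meant to be run from the brick's root directory. A slightly more cautious variant (the brick path is a placeholder) reviews the orphans before deleting them:

    cd /bricks/brick1
    find .glusterfs/[0-9a-f][0-9a-f] -type f -links 1 -print0 | xargs -0 ls -l    # review candidates first
    find .glusterfs/[0-9a-f][0-9a-f] -type f -links 1 -print0 | xargs -0 rm -f    # then remove them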
16:21 jfdoucet JoeJulian, ok I will look into that, thanks
16:29 yossarianuk JoeJulian: in the end its working
16:30 yossarianuk I meant I added files on the slave that existed in the master brick btw
16:30 yossarianuk I know geo-rep is one way
16:35 marcoceppi joined #gluster
16:35 marcoceppi joined #gluster
16:43 alexcrow joined #gluster
16:54 kdhananjay joined #gluster
16:55 glusterbot News from newglusterbugs: [Bug 1222065] GlusterD fills the logs when the NFS-server is disabled <https://bugzilla.redhat.com/show_bug.cgi?id=1222065>
17:03 Rapture joined #gluster
17:03 glusterbot News from resolvedglusterbugs: [Bug 1141733] data loss when rebalance + renames are in progress and bricks from replica pairs goes down and comes back <https://bugzilla.redhat.com/show_bug.cgi?id=1141733>
17:03 glusterbot News from resolvedglusterbugs: [Bug 1139998] Renaming file while rebalance is in progress causes data loss <https://bugzilla.redhat.com/show_bug.cgi?id=1139998>
17:03 glusterbot News from resolvedglusterbugs: [Bug 1140348] Renaming file while rebalance is in progress causes data loss <https://bugzilla.redhat.com/show_bug.cgi?id=1140348>
17:03 glusterbot News from resolvedglusterbugs: [Bug 1138395] Renaming file while rebalance is in progress causes data loss <https://bugzilla.redhat.com/show_bug.cgi?id=1138395>
17:12 ira joined #gluster
17:17 ndevos be prepared, more glusterbot messages incoming!
17:25 glusterbot News from newglusterbugs: [Bug 1218479] Gluster NFS Mount Permission Denied Error (Occur Intermittent) <https://bugzilla.redhat.com/show_bug.cgi?id=1218479>
17:29 jackdpeterson joined #gluster
17:35 glusterbot News from resolvedglusterbugs: [Bug 1219358] Disperse volume: client crashed while running iozone <https://bugzilla.redhat.com/show_bug.cgi?id=1219358>
17:35 glusterbot News from resolvedglusterbugs: [Bug 1125431] GD_OP_VERSION_MAX should now have 30700 <https://bugzilla.redhat.com/show_bug.cgi?id=1125431>
17:35 glusterbot News from resolvedglusterbugs: [Bug 1208067] [SNAPSHOT]: Snapshot create fails while using scheduler to create snapshots <https://bugzilla.redhat.com/show_bug.cgi?id=1208067>
17:35 glusterbot News from resolvedglusterbugs: [Bug 1209112] [Snapshot]  Scheduler should display error message when shared storage is not mounted on node,  when checking snap_scheduler.py status <https://bugzilla.redhat.com/show_bug.cgi?id=1209112>
17:35 glusterbot News from resolvedglusterbugs: [Bug 1210204] [SNAPSHOT] - Unable to delete scheduled jobs <https://bugzilla.redhat.com/show_bug.cgi?id=1210204>
17:35 glusterbot News from resolvedglusterbugs: [Bug 1218575] Snapshot-scheduling helper script errors out while running "snap_scheduler.py init" <https://bugzilla.redhat.com/show_bug.cgi?id=1218575>
17:35 glusterbot News from resolvedglusterbugs: [Bug 949242] Introduce fallocate support <https://bugzilla.redhat.com/show_bug.cgi?id=949242>
17:35 glusterbot News from resolvedglusterbugs: [Bug 921942] iobuf: Add a function iobref_clear <https://bugzilla.redhat.com/show_bug.cgi?id=921942>
17:35 glusterbot News from resolvedglusterbugs: [Bug 1143886] when brick is down, rdma fuse mounting hangs for volumes with tcp,rdma as transport. <https://bugzilla.redhat.com/show_bug.cgi?id=1143886>
17:35 glusterbot News from resolvedglusterbugs: [Bug 1209117] [Snapshot]  Unable to edit, list , delete scheduled jobs when scheduler is disable <https://bugzilla.redhat.com/show_bug.cgi?id=1209117>
17:35 glusterbot News from resolvedglusterbugs: [Bug 1218562] Fix memory leak while using scandir <https://bugzilla.redhat.com/show_bug.cgi?id=1218562>
17:35 glusterbot News from resolvedglusterbugs: [Bug 1219848] Directories are missing on the mount point after attaching tier to distribute replicate volume. <https://bugzilla.redhat.com/show_bug.cgi?id=1219848>
17:35 glusterbot News from resolvedglusterbugs: [Bug 921232] No structured logging in glusterfs-hadoop <https://bugzilla.redhat.com/show_bug.cgi?id=921232>
17:35 glusterbot News from resolvedglusterbugs: [Bug 1209129] DHT Rebalancing within a tier will cause the file to lose its heat(database) metadata <https://bugzilla.redhat.com/show_bug.cgi?id=1209129>
17:35 glusterbot News from resolvedglusterbugs: [Bug 1220100] Typos in the messages logged by the CTR translator <https://bugzilla.redhat.com/show_bug.cgi?id=1220100>
17:35 glusterbot News from resolvedglusterbugs: [Bug 1219467] tools/glusterfind: Support Partial Find feature <https://bugzilla.redhat.com/show_bug.cgi?id=1219467>
17:35 glusterbot News from resolvedglusterbugs: [Bug 1138503] [RFE] Improve debuggability of glusterd <https://bugzilla.redhat.com/show_bug.cgi?id=1138503>
17:35 glusterbot News from resolvedglusterbugs: [Bug 1217429] geo-rep: add debug logs to master for slave ENTRY operation failures <https://bugzilla.redhat.com/show_bug.cgi?id=1217429>
17:35 glusterbot News from resolvedglusterbugs: [Bug 949400] Log messages for start of crawl, end of crawl and number of files self-healed has to be reported in glustershd.log file <https://bugzilla.redhat.com/show_bug.cgi?id=949400>
17:35 glusterbot News from resolvedglusterbugs: [Bug 1167793] fsync on write-behind doesn't wait for pending writes when an error is encountered <https://bugzilla.redhat.com/show_bug.cgi?id=1167793>
17:35 glusterbot News from resolvedglusterbugs: [Bug 1217723] Upcall: xlator options for Upcall xlator <https://bugzilla.redhat.com/show_bug.cgi?id=1217723>
17:35 glusterbot News from resolvedglusterbugs: [Bug 1218567] Upcall: Cleanup the expired upcall entries <https://bugzilla.redhat.com/show_bug.cgi?id=1218567>
17:35 glusterbot News from resolvedglusterbugs: [Bug 1215550] glusterfsd crashed after directory was removed from the mount point, while self-heal and rebalance were running on the volume <https://bugzilla.redhat.com/show_bug.cgi?id=1215550>
17:35 glusterbot News from resolvedglusterbugs: [Bug 1217135] readdir-ahead needs to be enabled by default for new volumes on gluster-3.7 <https://bugzilla.redhat.com/show_bug.cgi?id=1217135>
17:35 glusterbot News from resolvedglusterbugs: [Bug 1176543] RDMA: GFAPI benchmark segfaults when ran with greater than 2 threads, no segfaults are seen over TCP <https://bugzilla.redhat.com/show_bug.cgi?id=1176543>
17:35 glusterbot News from resolvedglusterbugs: [Bug 1139997] rebalance is not resulting in the hash layout changes being available to nfs client <https://bugzilla.redhat.com/show_bug.cgi?id=1139997>
17:35 glusterbot News from resolvedglusterbugs: [Bug 1151308] data loss when rebalance + renames are in progress and bricks from replica pairs goes down and comes back <https://bugzilla.redhat.com/show_bug.cgi?id=1151308>
17:35 glusterbot News from resolvedglusterbugs: [Bug 1219842] [RFE] Data Tiering:Need a way from CLI to identify hot and cold tier bricks easily <https://bugzilla.redhat.com/show_bug.cgi?id=1219842>
17:35 glusterbot News from resolvedglusterbugs: [Bug 1219850] Data Tiering: attaching a tier with non supported replica count crashes glusterd on local host <https://bugzilla.redhat.com/show_bug.cgi?id=1219850>
17:35 glusterbot News from resolvedglusterbugs: [Bug 1220047] Data Tiering:3.7.0:data loss:detach-tier not flushing data to cold-tier <https://bugzilla.redhat.com/show_bug.cgi?id=1220047>
17:35 glusterbot News from resolvedglusterbugs: [Bug 1220050] Data Tiering:UI:when a user looks for detach-tier help, instead command seems to be getting executed <https://bugzilla.redhat.com/show_bug.cgi?id=1220050>
17:35 glusterbot News from resolvedglusterbugs: [Bug 1220051] Data Tiering: Volume inconsistency errors getting logged when attaching uneven(odd) number of hot bricks in hot tier(pure distribute tier layer) to a dist-rep volume <https://bugzilla.redhat.com/show_bug.cgi?id=1220051>
17:35 glusterbot News from resolvedglusterbugs: [Bug 1140338] rebalance is not resulting in the hash layout changes being available to nfs client <https://bugzilla.redhat.com/show_bug.cgi?id=1140338>
17:35 glusterbot News from resolvedglusterbugs: [Bug 1157976] AFR gives EROFS when fop fails on all subvolumes when client-quorum is enabled <https://bugzilla.redhat.com/show_bug.cgi?id=1157976>
17:35 glusterbot News from resolvedglusterbugs: [Bug 1138393] rebalance is not resulting in the hash layout changes being available to nfs client <https://bugzilla.redhat.com/show_bug.cgi?id=1138393>
17:35 glusterbot News from resolvedglusterbugs: [Bug 928646] Rebalance fails on all the nodes when glusterd is down on one of the nodes in the cluster <https://bugzilla.redhat.com/show_bug.cgi?id=928646>
17:35 glusterbot News from resolvedglusterbugs: [Bug 1139195] vdsm invoked oom-killer during rebalance and Killed process 4305, UID 0, (glusterfs nfs process) <https://bugzilla.redhat.com/show_bug.cgi?id=1139195>
17:36 glusterbot News from resolvedglusterbugs: [Bug 1192435] server crashed during rebalance in dht_selfheal_layout_new_directory <https://bugzilla.redhat.com/show_bug.cgi?id=1192435>
17:36 glusterbot News from resolvedglusterbugs: [Bug 1196019] Any op on files in the root directory of the volume fails unless absolute path is specified. <https://bugzilla.redhat.com/show_bug.cgi?id=1196019>
17:36 glusterbot News from resolvedglusterbugs: [Bug 1191919] Disperse volume: Input/output error when listing files/directories under nfs mount <https://bugzilla.redhat.com/show_bug.cgi?id=1191919>
17:36 glusterbot News from resolvedglusterbugs: [Bug 958108] Fuse mount crashes while running FSCT tool on the Samba Share from a windows client <https://bugzilla.redhat.com/show_bug.cgi?id=958108>
17:36 glusterbot News from resolvedglusterbugs: [Bug 1136349] DHT - remove-brick - data loss - when remove-brick with 'start' is in progress, perform rename operation on files. commit remove-brick, after status is 'completed' and few files are missing. <https://bugzilla.redhat.com/show_bug.cgi?id=1136349>
17:36 glusterbot News from resolvedglusterbugs: [Bug 1218584] RFE: Clone of a snapshot <https://bugzilla.redhat.com/show_bug.cgi?id=1218584>
17:36 glusterbot News from resolvedglusterbugs: [Bug 1220020] status.brick memory allocation failure. <https://bugzilla.redhat.com/show_bug.cgi?id=1220020>
17:36 glusterbot News from resolvedglusterbugs: [Bug 1200879] initialize child_down_cond conditional variable. <https://bugzilla.redhat.com/show_bug.cgi?id=1200879>
17:36 glusterbot News from resolvedglusterbugs: [Bug 1161518] DHT + rebalance :-  skipped file count is always 'zero' even though rebalance has skipped many files . <https://bugzilla.redhat.com/show_bug.cgi?id=1161518>
17:36 glusterbot News from resolvedglusterbugs: [Bug 1201724] Handle the review comments in bit-rot patches <https://bugzilla.redhat.com/show_bug.cgi?id=1201724>
17:36 glusterbot News from resolvedglusterbugs: [Bug 1105147] Setting either of user.cifs or user.smb to disable should disable smb shares when the smb share is already available <https://bugzilla.redhat.com/show_bug.cgi?id=1105147>
17:36 glusterbot News from resolvedglusterbugs: [Bug 1218566] upcall: polling is done for a invalid file <https://bugzilla.redhat.com/show_bug.cgi?id=1218566>
17:36 glusterbot News from resolvedglusterbugs: [Bug 1207054] BitRot :- Object versions is not incremented some times <https://bugzilla.redhat.com/show_bug.cgi?id=1207054>
17:55 glusterbot News from newglusterbugs: [Bug 1222088] Data Tiering:3.7.0:data loss:detach-tier not flushing data to cold-tier <https://bugzilla.redhat.com/show_bug.cgi?id=1222088>
18:16 redbeard joined #gluster
18:23 nsoffer joined #gluster
18:24 Forcepoint joined #gluster
18:26 kovshenin joined #gluster
18:26 gem joined #gluster
18:27 atinmu joined #gluster
18:28 sage joined #gluster
18:42 Gill joined #gluster
18:54 coredump joined #gluster
19:00 shaunm joined #gluster
19:10 Forcepoint_ joined #gluster
19:19 shaunm joined #gluster
19:22 Gill joined #gluster
19:26 neofob joined #gluster
19:37 tdasilva joined #gluster
20:01 lpabon joined #gluster
20:01 jfdoucet JoeJulian, I just tested SplitMount. Wonderful
20:02 jfdoucet Definitely a must have
20:04 scooby2 joined #gluster
20:18 Rapture joined #gluster
20:19 spalai joined #gluster
20:25 JoeJulian Thanks jfdoucet
20:32 CyrilPeponnet Guys, I'm trying to switch from nfs to gfs but I need enable-ino32 to be enabled. Sounds like it doesn't work using gfs fuse
20:32 CyrilPeponnet the option is enabled on the volume, it works fine using nfs but as long as I try to use gfs it doesn't work anymore
20:33 ndevos CyrilPeponnet: for the fuse client you need to use the --enable-ino32 option, see "glusterfs --help"
20:34 ndevos CyrilPeponnet: it is a client-side option, the nfs-server is a client too ;-)
20:34 CyrilPeponnet I tried passing it as a mount option with no luck
20:35 ndevos CyrilPeponnet: oh, maybe the /sbin/mount.glusterfs script does not have an option for it?
20:35 ndevos CyrilPeponnet: you should be able to start the "glusterfs" process with the right options manually
20:35 CyrilPeponnet sounds like it has, according to the gluster logs
20:35 CyrilPeponnet [glusterfsd.c:1959:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.5.2 (/usr/sbin/glusterfs --read-only --enable-ino32 --volfile-s
20:36 CyrilPeponnet maybe it simply doesn't show on mount options once mounted
20:36 ndevos yes, that looks goo
20:36 ndevos d
20:37 ndevos I'm not sure, but I think /proc/mounts can not display non-standard mount options that some fuse processes supports
20:37 JoeJulian correct
20:37 CyrilPeponnet Good to know
20:37 CyrilPeponnet thanks guys
20:38 JoeJulian wait
20:38 JoeJulian no
20:38 CyrilPeponnet yep ?
20:38 JoeJulian nfs.ino32 is for nfs.
20:38 CyrilPeponnet enable-ino32
20:38 JoeJulian nfs.enable-ino32
20:38 JoeJulian yeah
20:38 CyrilPeponnet hum
20:38 CyrilPeponnet so no ino32 for gfs?
20:38 JoeJulian Why do you think you need 32 bit inodes for fuse?
20:39 ndevos nfs.enable-ino32 is for the nfs-server, what --enable-ino32 does for fuse
20:39 CyrilPeponnet Because switching from nfs to gfs, I have issue with binaries
20:39 CyrilPeponnet which are 32 bits on a 64bits system
20:39 JoeJulian https://joejulian.name/blog/broken-32bit-apps-on-glusterfs/
20:39 JoeJulian right
20:40 CyrilPeponnet I confirm this option doesn't work for gfs
20:40 ndevos oh, and /sbin/mount.glusterfs actually should handle a "enable-ino32" mount option
20:40 JoeJulian Ah, right. That didn't exist back then.
20:41 JoeJulian --enable-ino32[=BOOL]  Use 32-bit inodes when mounting to workaround
20:41 JoeJulian broken applications that don't support 64-bit
20:41 JoeJulian inodes
20:41 CyrilPeponnet So I'm screwed ?
20:41 JoeJulian No, just add "enable-ino32" in fstab.
20:41 CyrilPeponnet doesn't work
20:41 ndevos CyrilPeponnet: when you do a "ls -li /mnt/....." does it show 64-bit inodes?
20:42 CyrilPeponnet right
20:42 CyrilPeponnet works
20:42 CyrilPeponnet :p
20:42 CyrilPeponnet The truth is out there
20:42 ndevos hehe
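In other words, enable-ino32 is a client-side (FUSE) option, not a volume option. A sketch of both ways to pass it, with placeholder server and volume names:

    mount -t glusterfs -o enable-ino32 server1:/myvol /mnt/myvol
    # or the equivalent /etc/fstab entry:
    server1:/myvol  /mnt/myvol  glusterfs  defaults,_netdev,enable-ino32  0 0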
20:45 Forcepoint_ hi guys, i'm having a problem with clients accessing one of my gluster nodes. The logs talk about permission denied errors
20:45 Forcepoint_ for example, [2015-05-15 20:43:39.440511] I [server-rpc-fops.c:699:server_removexattr_cbk] 0-data-server: 30965: REMOVEXATTR *some_path* (d351fd22-7ff1-46a5-848b-ac3583ec4b32) of key  ==> (Permission denied)
20:45 Forcepoint_ or [2015-05-15 20:43:39.464733] E [server-rpc-fops.c:1507:server_open_cbk] 0-data-server: 30972: OPEN *some path* (d351fd22-7ff1-46a5-848b-ac3583ec4b32) ==> (Permission denied)
20:45 Forcepoint_ does anyone have a clue??
20:46 JoeJulian selinux?
20:46 ndevos JoeJulian: indeed, it did not exist when you wrote that post, http://review.gluster.org/3885 introduced it
20:46 JoeJulian I should amend it.
20:47 Forcepoint_ I dont have SELinux enabled..
20:47 Forcepoint_ (nor installed I believe..)
20:47 ndevos Forcepoint_: how are you accessing the volume, over fuse or nfs?
20:48 Forcepoint_ fuse
20:48 JoeJulian Forcepoint_: what filesystem are you using for your bricks?
20:48 Forcepoint_ ext4
20:49 Forcepoint_ I had a lot of problems today with a brick failing, after healing the new brick these errors appear on the server with the new brick
20:49 JoeJulian EPERM: "This type of object does not support extended attributes."
20:49 ndevos Forcepoint_: does the uid/gid of the system that does the mount match the uid/gid on the bricks?
20:49 Forcepoint_ they are both root, so yes
20:50 JoeJulian doesn't matter, brick service operates on xattrs as root anyway.
20:50 JoeJulian Hrm... is that key blank?
20:51 JoeJulian what version is this?
20:51 ndevos I think there are some permission checks somewhere in the xlator stack... but yes, for root it should not matter anyway
20:51 Forcepoint_ I see no key, indeed.. this is version 3.6.3
20:51 Forcepoint_ on Amazon AMI (CentOS)
20:52 ndevos the xattr error can be a red herring, the OPEN failure looks more serious - but it could be the other way around
20:53 Forcepoint_ yes, i have absolutely no idea..
20:53 Forcepoint_ [2015-05-15 20:52:06.602028] E [server-rpc-fops.c:1507:server_open_cbk] 0-data-server: 61302: OPEN *some path* (a84ffa79-8e0b-4122-83e3-2a31e7b8a941) ==> (Permission denied)
20:53 Forcepoint_ they are really filling up the logs
20:54 joseki joined #gluster
20:54 Forcepoint_ I tried creating a completely new VPS from scratch and replicating, but I still get these same errors
20:54 joseki sorry for being a broken record, just curious on this one and still haven't seen an answer in the logs: does changing the performance. options require any sort of restart of the volume or daemons?
20:55 JoeJulian I could swear I answered that before. It should not, no.
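Volume options set with "gluster volume set" are applied on the fly: glusterd regenerates the volfiles and clients reload their graph, so no remount or daemon restart is needed. A sketch with a placeholder volume name and one of the performance.* options joseki is tuning:

    gluster volume set myvol performance.io-thread-count 32
    gluster volume info myvol    # changed options appear under "Options Reconfigured"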
20:56 joseki Thanks JoeJulian. I'm curious why I don't see more of a performance impact from nearly maxing out the threads parameter
20:56 JoeJulian Forcepoint_: Is that in the client log, or the brick log?
20:56 Forcepoint_ JoeJulian: those errors are in the brick log
20:57 JoeJulian joseki: network bottleneck? io channel bottleneck?
20:58 JoeJulian So wtf is it in the server cbk with nothing at the posix level??? Gah!
20:59 Forcepoint_ :|
21:00 Forcepoint_ This is one of your blog posts: https://joejulian.name/blog/replacing-a-brick-on-glusterfs-340/
21:01 Forcepoint_ I replaced the failing hard drive today and followed that post, could it be that mucked something up?
21:01 JoeJulian If I were trying to diagnose it, I would probably enable debug logs and see if there's a clue. If there was none, I might try trace logs. The idea is to backtrace that error through the translation stack to posix.
21:01 JoeJulian That should still be valid.
21:04 JoeJulian hey, ndevos, does bug 991084 need to get re-triaged? This seems like it's low hanging fruit.
21:04 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=991084 high, unspecified, ---, bugs, NEW , No way to start a failed brick when replaced the location with empty folder
21:05 Forcepoint_ hmm ok..
21:06 ndevos JoeJulian: hmm, good point, we should ask if it is still happening with a newer version, because we will not merge patches for 3.4 anymore
21:07 JoeJulian It is.
21:07 ndevos okay, change the version to something most current?
21:07 JoeJulian I can't.
21:07 JoeJulian It's not my bug and I don't have permissions.
21:08 ndevos what version should it be?
21:11 wushudoin joined #gluster
21:11 Forcepoint_ Running glusterd --debug doesn't show any weird stuff
21:12 CyrilPeponnet @JoeJulian I confirm that it doesn't fix the issue
21:13 CyrilPeponnet Inodes are 32-bit, fine, but the getdents syscall returns EOVERFLOW (Value too large for defined data type) using gfs
21:14 ndevos CyrilPeponnet: oh, ew, thats bad, you should file a bug for that
21:14 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
21:15 ndevos CyrilPeponnet: when you put it like that, I suspect that adding READDIRP to fuse broke it - but I would need to check the details to be sure
21:16 JoeJulian easily tested by disabling readdirp, no?
21:18 ndevos yes, I suppose so - CyrilPeponnet there is an option for the glusterfs client to disable readdirp
21:18 CyrilPeponnet I seee
21:18 CyrilPeponnet performance.force-readdirp
21:18 CyrilPeponnet Default Value: true
21:18 JoeJulian CyrilPeponnet: Add "use-readdirp=off" to fstab
21:18 JoeJulian but...
21:18 JoeJulian hey, why does it say it defaults to off?
21:19 CyrilPeponnet let met try this
21:19 JoeJulian Heh, the text is wrong. It defaults to yes. I should file a bug
21:19 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
21:20 Forcepoint_ JoeJulian: strangely enough while running glusterfs --debug there are much less (almost no) of those permission denied errors produced..
21:20 Forcepoint_ I'm stumped
21:21 CyrilPeponnet @JoeJulian you were right
21:21 Forcepoint_ probably a red herring
21:22 CyrilPeponnet @JoeJulian ndevos use-readdirp=off fixes the getdents issue when triggered in a 32-bit context.
21:22 Forcepoint_ Wait here is a new one: W [dict.c:1156:dict_foreach_match] (--> /usr/lib64/libglusterfs.so.0(_gf_log_callingfn+0x1e0)[0x7f58a9d51550] (--> /usr/lib64/libglusterfs.so.0(dict_foreach_match+0xc3)[0x7f58a9d49ec3] (--> /usr/lib64/glusterfs/3.6.3/xlator/features/marker.so(marker_getxattr_cbk+0x9d)[0x7f589c37b47d] (--> /usr/lib64/libglusterfs.so.0(default_getxattr_cbk+0xc2)[0x7f58a9d5e032] (--> /usr/lib64/libglusterfs.so.0(default_getxattr_
21:22 glusterbot Forcepoint_: ('s karma is now -71
21:22 Forcepoint_ cbk+0xc2)[0x7f58a9d5e032] ))))) 0-dict: dict|match|action is NULL
21:22 glusterbot Forcepoint_: ('s karma is now -72
21:22 glusterbot Forcepoint_: ('s karma is now -73
21:22 JoeJulian Forcepoint_: My guess, and I don't have any evidence yet to prove it, is that it has something to do with files that are not healed and that those errors should go away over time.
21:22 glusterbot Forcepoint_: ('s karma is now -74
21:22 glusterbot Forcepoint_: ('s karma is now -75
21:23 * JoeJulian hates the karma regex
21:23 ndevos CyrilPeponnet: file a bug against the fuse component, and include those details, fixing it should be easy
21:23 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
21:23 JoeJulian ndevos: I'm already on it.
21:24 ndevos "easy" for a developor, but you want to send a patch, I'd be happy to take it too
21:24 CyrilPeponnet I should get an award :) I always find tricky bugs
21:24 ndevos developor, that actually should end with "er"
21:24 CyrilPeponnet does readdirp = off have an impact on performance ?
21:24 Forcepoint_ Yes could be it.. "gluster volume heal data info" says nothing to be done though. But I'll keep monitoring it over the next few days then
21:24 CyrilPeponnet especially with plenty of files in folders...
21:25 JoeJulian Meh, that's not a tricky bug, just an error in the defaults that are printed. Why that doesn't get read directly from the volume_options struct, I have no clue.
21:25 JoeJulian yes
21:25 CyrilPeponnet So I should stay with nfs for now ?
21:25 ndevos JoeJulian: that's one bug, the conversion of 64-bit inodes to 32-bit ones is another
21:25 JoeJulian Yes, turning off readdirp will affect performance negatively.
21:26 JoeJulian Oh, well that.
21:26 CyrilPeponnet how bad ?
21:26 JoeJulian It's pretty crappy to read large directories without it.
21:26 JoeJulian Depends on the size of the directory.
21:26 CyrilPeponnet I can test it right now
21:26 ndevos CyrilPeponnet: readdirp combines a standard readdir with the attributes (like stat) for each directory entry, plain  readdir needs to do the additional stat per entry
21:26 JoeJulian You could mount the volume twice. Once for your broken app, and once for everybody else.
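A sketch of that double-mount idea, with placeholder names: one mount carries the workaround options for the 32-bit application, the other keeps the defaults for everyone else:

    mount -t glusterfs -o enable-ino32,use-readdirp=off server1:/myvol /mnt/myvol-legacy
    mount -t glusterfs server1:/myvol /mnt/myvol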
21:27 Forcepoint_ JoeJulian: Anyway, thanks for the help so far, appreciate it very much :)
21:27 JoeJulian You're welcome.
21:27 JoeJulian Forcepoint_: Do a "heal...full"
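That is, force a full crawl rather than relying only on the pending-heal index; "data" is the volume name used in this setup:

    gluster volume heal data full    # crawl the whole volume and heal what differs
    gluster volume heal data info    # list entries still pending heal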
21:27 CyrilPeponnet hard to do... too much dependency related to this volume
21:27 CyrilPeponnet (mount point path I mean)
21:28 JoeJulian Run the broken app in its own docker container.
21:29 JoeJulian Just brainstorming.
21:29 ndevos CyrilPeponnet, JoeJulian: I'll be dropping off now, please make sure there will be 2 bugs when I am back tomorrow or next week :)
21:29 JoeJulian hehe
21:29 CyrilPeponnet don't say broken... it's brand new old tcl test suites....
21:29 CyrilPeponnet :p
21:29 Forcepoint_ JoeJulian: ok done
21:30 Forcepoint_ This just shows up: [2015-05-15 21:28:19.478487] I [login.c:82:gf_auth] 0-auth/login: allowed user names: df4af7e2-48e4-4ae5-be98-de8d40338e8f
21:30 Forcepoint_ does that ring a bell maybe?
21:30 JoeJulian CyrilPeponnet: has to file the one about readdirp returning 64 bit inodes with enable-ino32. I don't want to get emails about that one.
21:31 JoeJulian Forcepoint_: Info message. Looks like it's just showing the config option from the vol file.
21:35 Forcepoint_ JoeJulian: I'll be damned, I think you were right about the healing, I just turned all servers off except the new 'problematic' one, and stuff just works
21:42 Forcepoint_ JoeJulian: thanks again, I'm off (now it seems to be working, hah!). have a nice day! :)
21:44 JoeJulian you too
21:47 CyrilPeponnet @JoeJulian #1222150
21:51 PaulCuzner joined #gluster
21:56 glusterbot News from newglusterbugs: [Bug 1222148] usage text is wrong for use-readdirp mount default <https://bugzilla.redhat.com/show_bug.cgi?id=1222148>
21:56 glusterbot News from newglusterbugs: [Bug 1222150] readdirp return 64bits inodes even if enable-ino32 is set <https://bugzilla.redhat.com/show_bug.cgi?id=1222150>
21:58 CyrilPeponnet @JoeJulian for the record, readdirp set to off has a minor impact on perf here, even on a 3-node setup with 4k clients
22:00 lexi2 joined #gluster
22:23 tdasilva joined #gluster
22:31 CyrilPeponnet another thing, I came across this page: http://www.gluster.org/community/documentation/index.php/Translators/performance/io-cache
22:31 CyrilPeponnet quite old
22:31 CyrilPeponnet but
22:31 CyrilPeponnet is io-cache enabled client-side by default? can I tweak its values for size and timeout?
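io-cache is part of the default client translator stack and is tuned through performance.* volume options; the exact defaults depend on the GlusterFS version. A sketch with a placeholder volume name:

    gluster volume set myvol performance.io-cache on              # toggle the translator
    gluster volume set myvol performance.cache-size 256MB         # io-cache size
    gluster volume set myvol performance.cache-refresh-timeout 4  # seconds before cached data is revalidated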
22:55 diegows joined #gluster
22:57 nsoffer joined #gluster
23:28 DV joined #gluster
