
IRC log for #gluster, 2015-06-02


All times shown according to UTC.

Time Nick Message
00:23 Gill joined #gluster
00:48 victori joined #gluster
01:21 PaulCuzner joined #gluster
01:40 Pintomatic joined #gluster
01:41 frankS2 joined #gluster
01:45 harish joined #gluster
01:45 jmcantrell joined #gluster
01:47 kevein joined #gluster
01:52 wushudoin| joined #gluster
01:57 billputer joined #gluster
01:57 wushudoin| joined #gluster
02:07 kedmison joined #gluster
02:13 kedmison I have a volume op-version question.  I'm running gluster 3.6.3 on all servers and clients.  My volume files (/var/lib/glusterd/vol/<volume>/info) show op-version=3 and client-op-version=3.  I'm also getting messages in client logs like 'Server and Client lk-version numbers are not same, reopening the fds'.  Are these related?  How can I bring the volume op-versions up to 3.6.3?
02:13 glusterbot kedmison: This is normal behavior and can safely be ignored.
02:14 kedmison ok, glusterbot; let me break it into two questions so I can see which one you were answering.  I have a volume op-version question.  I'm running gluster 3.6.3 on all servers and clients.  My volume files (/var/lib/glusterd/vol/<volume>/info) show op-version=3 and client-op-version=3.  How can I bring the volume op-versions up to 3.6.3?
02:14 kedmison Also, I'm getting messages in client logs like 'Server and Client lk-version numbers are not same, reopening the fds'.  Are these related?
02:14 glusterbot kedmison: This is normal behavior and can safely be ignored.
02:20 nangthang joined #gluster
02:29 jvandewege_ joined #gluster
02:36 al joined #gluster
02:37 adzmely joined #gluster
02:55 jayunit1000 joined #gluster
02:56 doubt joined #gluster
02:57 victori joined #gluster
03:01 kedmison Can anyone help me out with the op-version question?
03:05 kedmison joined #gluster
03:05 kedmison joined #gluster
03:06 kedmison joined #gluster
03:08 [7] joined #gluster
03:09 overclk joined #gluster
03:10 gildub joined #gluster
03:15 doubt joined #gluster
03:16 sripathi joined #gluster
03:16 kedmison1 joined #gluster
03:17 kevein_ joined #gluster
03:27 jayunit1000 joined #gluster
03:41 kanagaraj joined #gluster
03:42 RameshN joined #gluster
03:50 shaunm_ joined #gluster
03:50 atinmu joined #gluster
03:52 rjoseph joined #gluster
03:55 kedmison joined #gluster
03:57 shubhendu joined #gluster
04:06 Gill joined #gluster
04:08 bharata-rao joined #gluster
04:14 kshlm joined #gluster
04:23 ppai joined #gluster
04:25 RameshN joined #gluster
04:26 victori joined #gluster
04:27 spandit joined #gluster
04:31 deepakcs joined #gluster
04:31 soumya joined #gluster
04:34 rjoseph joined #gluster
04:36 cholcombe joined #gluster
04:38 nbalacha joined #gluster
04:42 sakshi joined #gluster
04:43 yazhini joined #gluster
04:48 ramteid joined #gluster
04:53 rafi joined #gluster
04:57 rafi joined #gluster
05:01 jiffin joined #gluster
05:01 spalai joined #gluster
05:08 gem joined #gluster
05:13 kaushal_ joined #gluster
05:13 arcolife joined #gluster
05:14 victori joined #gluster
05:18 lalatenduM joined #gluster
05:18 kotreshhr joined #gluster
05:19 kshlm joined #gluster
05:20 pppp joined #gluster
05:25 schandra joined #gluster
05:25 ashiq joined #gluster
05:25 hgowtham joined #gluster
05:30 R0ok_ joined #gluster
05:31 spalai joined #gluster
05:32 vimal joined #gluster
05:33 glusterbot News from resolvedglusterbugs: [Bug 1009980] Glusterd won't start on Fedora19 <https://bugzilla.redhat.com/show_bug.cgi?id=1009980>
05:33 glusterbot News from resolvedglusterbugs: [Bug 1056621] Move hosting to Maven Central <https://bugzilla.redhat.com/show_bug.cgi?id=1056621>
05:37 sripathi joined #gluster
05:38 spalai joined #gluster
05:39 hagarth joined #gluster
05:39 Jandre joined #gluster
05:44 poornimag joined #gluster
05:55 spalai joined #gluster
05:56 atalur joined #gluster
05:58 sripathi joined #gluster
06:11 jtux joined #gluster
06:16 Bhaskarakiran joined #gluster
06:18 spalai joined #gluster
06:27 aaronott joined #gluster
06:29 maveric_amitc_ joined #gluster
06:33 glusterbot News from resolvedglusterbugs: [Bug 1223280] [geo-rep]: worker died with "ESTALE" when performed rm -rf on a directory from mount of master volume <https://bugzilla.redhat.com/show_bug.cgi?id=1223280>
06:33 glusterbot News from resolvedglusterbugs: [Bug 1223286] [geo-rep]: worker died with "ESTALE" when performed rm -rf on a directory from mount of master volume <https://bugzilla.redhat.com/show_bug.cgi?id=1223286>
06:33 glusterbot News from resolvedglusterbugs: [Bug 1224098] [geo-rep]: Even after successful sync, the DATA counter did not reset to 0 <https://bugzilla.redhat.com/show_bug.cgi?id=1224098>
06:33 glusterbot News from resolvedglusterbugs: [Bug 1224100] [geo-rep]: Even after successful sync, the DATA counter did not reset to 0 <https://bugzilla.redhat.com/show_bug.cgi?id=1224100>
06:33 glusterbot News from resolvedglusterbugs: [Bug 1223642] [geo-rep]: With tarssh the file is created at slave but it doesnt get sync <https://bugzilla.redhat.com/show_bug.cgi?id=1223642>
06:33 glusterbot News from resolvedglusterbugs: [Bug 1223644] [geo-rep]: With tarssh the file is created at slave but it doesnt get sync <https://bugzilla.redhat.com/show_bug.cgi?id=1223644>
06:33 glusterbot News from resolvedglusterbugs: [Bug 1225542] [geo-rep]: snapshot creation timesout even if geo-replication is in pause/stop/delete state <https://bugzilla.redhat.com/show_bug.cgi?id=1225542>
06:33 glusterbot News from resolvedglusterbugs: [Bug 1225543] [geo-rep]: snapshot creation timesout even if geo-replication is in pause/stop/delete state <https://bugzilla.redhat.com/show_bug.cgi?id=1225543>
06:33 glusterbot News from resolvedglusterbugs: [Bug 1221544] [Backup]: Unable to create a glusterfind session <https://bugzilla.redhat.com/show_bug.cgi?id=1221544>
06:33 glusterbot News from resolvedglusterbugs: [Bug 1225552] [Backup]: Unable to create a glusterfind session <https://bugzilla.redhat.com/show_bug.cgi?id=1225552>
06:33 glusterbot News from resolvedglusterbugs: [Bug 1222750] non-root geo-replication session goes to faulty state, when the session is started <https://bugzilla.redhat.com/show_bug.cgi?id=1222750>
06:33 glusterbot News from resolvedglusterbugs: [Bug 1223741] non-root geo-replication session goes to faulty state, when the session is started <https://bugzilla.redhat.com/show_bug.cgi?id=1223741>
06:36 saurabh_ joined #gluster
06:43 sas_ joined #gluster
06:44 rgustafs joined #gluster
06:45 ppai joined #gluster
06:47 autoditac joined #gluster
06:48 raghu joined #gluster
06:48 atinm joined #gluster
06:53 glusterbot News from newglusterbugs: [Bug 1175711] os.walk() vs scandir.walk() performance <https://bugzilla.redhat.com/show_bug.cgi?id=1175711>
06:54 nangthang joined #gluster
06:54 maveric_amitc_ joined #gluster
07:08 HeresJohny joined #gluster
07:11 karnan joined #gluster
07:21 sage joined #gluster
07:23 glusterbot News from newglusterbugs: [Bug 1221941] glusterfsd: bricks crash while executing ls on nfs-ganesha vers=3 <https://bugzilla.redhat.com/show_bug.cgi?id=1221941>
07:23 glusterbot News from newglusterbugs: [Bug 1227204] glusterfsd: bricks crash while executing ls on nfs-ganesha vers=3 <https://bugzilla.redhat.com/show_bug.cgi?id=1227204>
07:29 Manikandan joined #gluster
07:32 [Enrico] joined #gluster
07:55 Manikandan joined #gluster
08:02 Jandre joined #gluster
08:03 glusterbot News from resolvedglusterbugs: [Bug 1222869] [SELinux] [BVT]: Selinux throws AVC errors while running DHT automation on Rhel6.6 <https://bugzilla.redhat.com/show_bug.cgi?id=1222869>
08:03 glusterbot News from resolvedglusterbugs: [Bug 1221470] dHT rebalance: Dict_copy log messages when running rebalance on a dist-rep volume <https://bugzilla.redhat.com/show_bug.cgi?id=1221470>
08:03 glusterbot News from resolvedglusterbugs: [Bug 1225331] [geo-rep] stop-all-gluster-processes.sh fails to stop all gluster processes <https://bugzilla.redhat.com/show_bug.cgi?id=1225331>
08:03 glusterbot News from resolvedglusterbugs: [Bug 1223215] gluster volume status fails with locking failed error message <https://bugzilla.redhat.com/show_bug.cgi?id=1223215>
08:03 glusterbot News from resolvedglusterbugs: [Bug 1224292] peers connected in the middle of a transaction are participating in the transaction <https://bugzilla.redhat.com/show_bug.cgi?id=1224292>
08:03 glusterbot News from resolvedglusterbugs: [Bug 1225279] Different client can not execute "for((i=0;i<1000;i++));do ls -al;done" in a same directory at the sametime <https://bugzilla.redhat.com/show_bug.cgi?id=1225279>
08:04 glusterbot News from resolvedglusterbugs: [Bug 1225743] [AFR-V2] - afr_final_errno() should treat op_ret > 0 also as success <https://bugzilla.redhat.com/show_bug.cgi?id=1225743>
08:04 glusterbot News from resolvedglusterbugs: [Bug 1226029] I/O's hanging on tiered volumes (NFS) <https://bugzilla.redhat.com/show_bug.cgi?id=1226029>
08:04 glusterbot News from resolvedglusterbugs: [Bug 1225318] glusterd could crash in remove-brick-status when local remove-brick process has just completed <https://bugzilla.redhat.com/show_bug.cgi?id=1225318>
08:04 glusterbot News from resolvedglusterbugs: [Bug 1225919] Log EEXIST errors in DEBUG level in fops MKNOD and MKDIR <https://bugzilla.redhat.com/show_bug.cgi?id=1225919>
08:04 glusterbot News from resolvedglusterbugs: [Bug 1225922] Sharding - Skip update of block count and size for directories in readdirp callback <https://bugzilla.redhat.com/show_bug.cgi?id=1225922>
08:04 glusterbot News from resolvedglusterbugs: [Bug 1221967] Do not allow detach-tier commands on a non tiered volume <https://bugzilla.redhat.com/show_bug.cgi?id=1221967>
08:04 glusterbot News from resolvedglusterbugs: [Bug 1221969] tiering: use sperate log/socket/pid file for tiering <https://bugzilla.redhat.com/show_bug.cgi?id=1221969>
08:04 glusterbot News from resolvedglusterbugs: [Bug 1221534] rebalance failed after attaching the tier to the volume. <https://bugzilla.redhat.com/show_bug.cgi?id=1221534>
08:04 glusterbot News from resolvedglusterbugs: [Bug 1222198] Fix nfs/mount3.c build warnings reported in Koji <https://bugzilla.redhat.com/show_bug.cgi?id=1222198>
08:04 glusterbot News from resolvedglusterbugs: [Bug 1225320] ls command failed with features.read-only on while mounting ec volume. <https://bugzilla.redhat.com/show_bug.cgi?id=1225320>
08:04 glusterbot News from resolvedglusterbugs: [Bug 1218863] `ls' on a directory which has files with mismatching gfid's does not list anything <https://bugzilla.redhat.com/show_bug.cgi?id=1218863>
08:04 glusterbot News from resolvedglusterbugs: [Bug 1221503] DHT Rebalance : Misleading log messages for linkfiles <https://bugzilla.redhat.com/show_bug.cgi?id=1221503>
08:04 glusterbot News from resolvedglusterbugs: [Bug 1224647] [RFE] Provide hourly scrubbing option <https://bugzilla.redhat.com/show_bug.cgi?id=1224647>
08:04 glusterbot News from resolvedglusterbugs: [Bug 1224650] SIGNING FAILURE  Error messages  are poping up in the bitd log <https://bugzilla.redhat.com/show_bug.cgi?id=1224650>
08:04 glusterbot News from resolvedglusterbugs: [Bug 1225709] [RFE] Move signing trigger mechanism to [f]setxattr() <https://bugzilla.redhat.com/show_bug.cgi?id=1225709>
08:04 glusterbot News from resolvedglusterbugs: [Bug 1224894] Quota: spurious failures with quota testcases <https://bugzilla.redhat.com/show_bug.cgi?id=1224894>
08:04 glusterbot News from resolvedglusterbugs: [Bug 1225796] Spurious failure in tests/bugs/disperse/bug-1161621.t <https://bugzilla.redhat.com/show_bug.cgi?id=1225796>
08:04 glusterbot News from resolvedglusterbugs: [Bug 1221000] detach-tier status emulates like detach-tier stop <https://bugzilla.redhat.com/show_bug.cgi?id=1221000>
08:04 glusterbot News from resolvedglusterbugs: [Bug 1221476] Data Tiering:rebalance fails on a tiered volume <https://bugzilla.redhat.com/show_bug.cgi?id=1221476>
08:04 glusterbot News from resolvedglusterbugs: [Bug 1226024] cli/tiering:typo errors in tiering <https://bugzilla.redhat.com/show_bug.cgi?id=1226024>
08:04 glusterbot News from resolvedglusterbugs: [Bug 1221507] NFS-Ganesha: ACL should not be enabled by default <https://bugzilla.redhat.com/show_bug.cgi?id=1221507>
08:04 glusterbot News from resolvedglusterbugs: [Bug 1221477] The tiering feature requires counters. <https://bugzilla.redhat.com/show_bug.cgi?id=1221477>
08:04 glusterbot News from resolvedglusterbugs: [Bug 1224241] gfapi: zero size issue in glfs_h_acl_set() <https://bugzilla.redhat.com/show_bug.cgi?id=1224241>
08:15 Manikandan joined #gluster
08:20 liquidat joined #gluster
08:21 Jandre_ joined #gluster
08:22 Bardack so basically, if my request on gluster-users has not been sent out yet, it's just because there is a huge list of other tickets to be validated first?
08:22 Bardack so it's normal that it takes a week or more to see it?
08:23 jtux joined #gluster
08:23 glusterbot News from newglusterbugs: [Bug 1208384] NFS interoperability problem: Gluster Striped-Replicated can't read on vmware esxi 5.x NFS client <https://bugzilla.redhat.com/show_bug.cgi?id=1208384>
08:23 glusterbot News from newglusterbugs: [Bug 1218961] snapshot: Can not activate the name provided while creating snaps to do any further access <https://bugzilla.redhat.com/show_bug.cgi?id=1218961>
08:23 glusterbot News from newglusterbugs: [Bug 1219399] NFS interoperability problem: Gluster Striped-Replicated can't read on vmware esxi 5.x NFS client <https://bugzilla.redhat.com/show_bug.cgi?id=1219399>
08:24 glusterbot News from newglusterbugs: [Bug 1217722] Tracker bug for Logging framework expansion. <https://bugzilla.redhat.com/show_bug.cgi?id=1217722>
08:24 bjornar joined #gluster
08:27 Slashman joined #gluster
08:33 glusterbot News from resolvedglusterbugs: [Bug 1226629] bug-973073.t fails spuriously <https://bugzilla.redhat.com/show_bug.cgi?id=1226629>
08:34 glusterbot News from resolvedglusterbugs: [Bug 1226120] [Snapshot] Do not run scheduler if ovirt scheduler is running <https://bugzilla.redhat.com/show_bug.cgi?id=1226120>
08:34 glusterbot News from resolvedglusterbugs: [Bug 1226139] Implement MKNOD fop in bit-rot. <https://bugzilla.redhat.com/show_bug.cgi?id=1226139>
08:34 glusterbot News from resolvedglusterbugs: [Bug 1226153] Quota: Do not allow set/unset of quota limit in heterogeneous cluster <https://bugzilla.redhat.com/show_bug.cgi?id=1226153>
08:34 glusterbot News from resolvedglusterbugs: [Bug 1226032] glusterd crashed on the node when tried to detach a tier after restoring data from the snapshot. <https://bugzilla.redhat.com/show_bug.cgi?id=1226032>
08:34 glusterbot News from resolvedglusterbugs: [Bug 1226117] [RFE] Return proper error codes in case of snapshot failure <https://bugzilla.redhat.com/show_bug.cgi?id=1226117>
08:34 glusterbot News from resolvedglusterbugs: [Bug 1226853] Volume start fails when glusterfs is source compiled with GCC v5.1.1 <https://bugzilla.redhat.com/show_bug.cgi?id=1226853>
08:34 glusterbot News from resolvedglusterbugs: [Bug 1226146] BitRot :- bitd is not signing Objects if more than 3 bricks are present on same node <https://bugzilla.redhat.com/show_bug.cgi?id=1226146>
08:34 glusterbot News from resolvedglusterbugs: [Bug 1219955] GlusterFS 3.7.1 tracker <https://bugzilla.redhat.com/show_bug.cgi?id=1219955>
08:35 Norky joined #gluster
08:35 harish joined #gluster
08:37 Trefex joined #gluster
08:41 atinmu joined #gluster
08:45 aravindavk joined #gluster
08:47 rgustafs joined #gluster
08:48 nsoffer joined #gluster
08:53 kbyrne joined #gluster
09:08 ProT-0-TypE joined #gluster
09:11 atalur joined #gluster
09:11 The_Ball joined #gluster
09:12 The_Ball Are there caveats to running a two-node gluster setup?
09:19 ctria joined #gluster
09:23 hchiramm joined #gluster
09:29 PaulCuzner joined #gluster
09:38 itisravi joined #gluster
09:40 ju5t joined #gluster
09:48 hagarth joined #gluster
10:02 LebedevRI joined #gluster
10:05 badone_ joined #gluster
10:05 adzmely joined #gluster
10:05 csim_ joined #gluster
10:07 anrao joined #gluster
10:11 csim joined #gluster
10:20 spalai joined #gluster
10:28 Leildin The_Ball, do you mean stuff like, should they be able to communicate ? what are you wanting to do/know ?
10:30 sripathi joined #gluster
10:31 nbalacha joined #gluster
10:31 sripathi joined #gluster
10:36 aaronott joined #gluster
10:40 kkeithley joined #gluster
10:41 The_Ball Leildin, I'm setting up a two-node cluster and was wondering if a three-node cluster would be better, for example by making it easier to establish quorum?
10:42 Leildin what kind of volume are you going to make ?
10:43 adzmely joined #gluster
10:45 nbalacha joined #gluster
10:47 ira joined #gluster
10:48 RajeshReddy joined #gluster
10:50 ira joined #gluster
10:50 kshlm joined #gluster
10:58 The_Ball Leildin, a fault tolerant volume for virtual machines running on ovirt nodes
11:06 kkeithley joined #gluster
11:06 rwheeler joined #gluster
11:09 hagarth joined #gluster
11:32 LebedevRI joined #gluster
11:36 spalai joined #gluster
11:37 itisravi The_Ball: For ovirt setups a replica 3 setup is better as it provides better protection against split-brains.
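For reference, a minimal sketch of the replica 3 layout itisravi is describing; the volume name, hostnames and brick paths are placeholders, and the quorum options are the settings commonly suggested for VM storage rather than something confirmed in this log:

    gluster volume create vmstore replica 3 \
        server1:/bricks/vmstore/brick server2:/bricks/vmstore/brick server3:/bricks/vmstore/brick
    # client- and server-side quorum reduce the chance of split-brain when one node drops out
    gluster volume set vmstore cluster.quorum-type auto
    gluster volume set vmstore cluster.server-quorum-type server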
11:40 hagarth joined #gluster
11:41 kkeithley joined #gluster
11:43 ProT-0-TypE joined #gluster
11:55 soumya joined #gluster
11:58 rafi1 joined #gluster
11:58 ndevos REMINDER: Gluster Bug Triage meeting starts in a few minutes in #gluster-meeting
11:59 Trefex joined #gluster
12:10 soumya joined #gluster
12:15 atalur joined #gluster
12:16 poornimag joined #gluster
12:22 kkeithley joined #gluster
12:24 glusterbot News from newglusterbugs: [Bug 1223839] /lib64/libglusterfs.so.0(+0x21725)[0x7f248655a725] ))))) 0-rpc_transport: invalid argument: this <https://bugzilla.redhat.com/show_bug.cgi?id=1223839>
12:28 shubhendu joined #gluster
12:33 jcastill1 joined #gluster
12:44 tyrok_laptop2 joined #gluster
12:44 bene2 joined #gluster
12:47 meghanam joined #gluster
12:47 tyrok_laptop2 Hi!  I've had an issue with both 3.7.0 and 3.7.1 on CentOS 7 where if I try to install glusterfs-ganesha, it complains that it needs nfs-ganesha-gluster but has no way to install it.  Any ideas?
12:48 itisravi_ joined #gluster
12:48 wkf joined #gluster
12:48 kkeithley joined #gluster
12:50 jcastillo joined #gluster
12:53 soumya tyrok_laptop2, http://download.gluster.org/pub/gluster/glusterfs/nfs-ganesha/CentOS/epel-7/
12:54 Jandre joined #gluster
12:54 soumya tyrok_laptop2, install nfs-ganesha and nfs-ganesha-fsal-gluster
12:54 tyrok_laptop2 soumya: And that works with 3.7?
12:54 soumya tyrok_laptop2, yes
12:54 tyrok_laptop2 soumya: I had seen that, but assumed that since it was a version from October, it was out of date.
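A sketch of what soumya's suggestion amounts to on CentOS 7, assuming a hand-written repo file pointing at the directory linked above; the repo file name and gpgcheck setting are illustrative, and the exact directory layout under epel-7/ may differ:

    # /etc/yum.repos.d/nfs-ganesha-gluster.repo
    [nfs-ganesha-gluster]
    name=nfs-ganesha packages built for GlusterFS
    baseurl=http://download.gluster.org/pub/gluster/glusterfs/nfs-ganesha/CentOS/epel-7/
    enabled=1
    gpgcheck=0

    # then:
    yum install nfs-ganesha nfs-ganesha-fsal-gluster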
12:54 glusterbot News from newglusterbugs: [Bug 1225546] Pass slave volume in geo-rep as read-only <https://bugzilla.redhat.com/show_bug.cgi?id=1225546>
12:55 rafi joined #gluster
12:56 soumya ndevos, ^^^ ... does the above link have RPMs of the latest nfs-ganesha sources?
12:56 ndevos soumya: I think kkeithley_bat build the ganesha packages, no idea about the current status though
12:57 soumya ndevos, okay... so is there a link to the latest nfs-ganesha RPMs?
12:58 kkeithley_bat RPMs of ganesha-2.2 are in Fedora and RHEL. But RHEL doesn't have the FSAL_GLUSTER bits
12:58 kkeithley_bat But the RHEL RPMs don't have the FSAL_GLUSTER bits
12:59 ndevos kkeithley_bat: have you built nfs-ganesha for the CentOS Storage SIG yet?
12:59 ndevos kkeithley_bat: I got my build certificate and all, just never tried to use it
12:59 aaronott joined #gluster
13:00 tyrok_laptop2 So is the nfs-ganesha repo still the proper way to get NFSv4 support installed over a Gluster 3.7.1 server, then?
13:01 ndevos tyrok_laptop2: yes, and the only way too
13:02 tyrok_laptop2 Same for pNFS?
13:02 ndevos tyrok_laptop2: well, depends on what "nfs-ganesha repo" you are referring to
13:02 atalur joined #gluster
13:02 tyrok_laptop2 ndevos: http://download.gluster.org/pub/gluster/glusterfs/nfs-ganesha/CentOS/epel-7/
13:02 kkeithley_bat ndevos: no, I haven't
13:02 ndevos tyrok_laptop2: jiffin is our pNFS expert for Ganesha, I dont know which version has his patches
13:03 tyrok_laptop2 Basically, I'm looking at 3.7 release notes and seeing NFSv4 and pNFS as prominent features and thinking "Awesome!  I want to try that!" and having difficulty figuring out how.
13:03 mikemol joined #gluster
13:04 ndevos tyrok_laptop2: I think jiffin has documented the steps for pNFS somewhere, he probably sees this conversation soon
13:04 tyrok_laptop2 There's this: https://github.com/gluster/glusterfs/blob/release-3.7/doc/features/mount_gluster_volume_using_pnfs.md
13:04 tyrok_laptop2 But it's not particularly long and seems to be missing some steps.
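For what it's worth, the client side of that document comes down to an NFSv4.1 mount against one of the storage servers (pNFS requires 4.1; the server name, volume and mount point here are placeholders), while the server-side ganesha setup is the part that needs the newer packages discussed just below:

    mount -t nfs -o vers=4.1 server1:/testvol /mnt/testvol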
13:05 glusterbot News from resolvedglusterbugs: [Bug 1226255] Undefined symbol "changelog_select_event" <https://bugzilla.redhat.com/show_bug.cgi?id=1226255>
13:05 soumya tyrok_laptop2, pNFS patches got merged early this year, I guess.. jiffin can confirm
13:05 pppp joined #gluster
13:06 soumya tyrok_laptop2, that means the RPMs in that link may not have those bits yet
13:07 shubhendu joined #gluster
13:07 ndevos tyrok_laptop2: I think the nfs-ganesha packages in Fedora 22 have everything you need, we just need those for CentOS too somewhere
13:07 tyrok_laptop2 If that's the case, a suggestion: remove the glusterfs-ganesha package from the main repo, since it has broken dependencies and is rather confusing.
13:08 jiffin tyrok_laptop2: initial patches for pNFS got merged in March 2015
13:08 tyrok_laptop2 jiffin: Patches to Gluster or to Ganesha?
13:09 jiffin tyrok_laptop2: patches to ganesha
13:09 tyrok_laptop2 Okay...so I would need a version of Ganesha newer than that.  Good to know.
13:09 jiffin tyrok_laptop2: Yes
13:10 mikemol Is a version of Ganesha with those patches merged release-worthy at this point?
13:10 jiffin tyrok_laptop2: one patch for performance improvement got merged last week
13:11 jiffin tyrok_laptop2: So it will be better to use latest ganesha rpms
13:11 mikemol So, background. tyrok_laptop2 and I are sitting next to each other.
13:11 tyrok_laptop2 Is there a Ganesha repo for CentOS?
13:12 tyrok_laptop2 Their download page lists individual RPMs for Fedora 20, from the looks of it, but I'm not seeing anything else pre-built.
13:12 kdhananjay joined #gluster
13:12 jiffin tyrok_laptop2: ndevos can help on that
13:15 Bhaskarakiran joined #gluster
13:16 jiffin tyrok_laptop2: If you need any help configuring a pNFS cluster, ping me on IRC or just drop a mail to gluster-dev if I am not available
13:16 tyrok_laptop2 jiffin: Cool.  Thanks!
13:16 mikemol First thing we'll need are RPMS compatible with Cent7. But yeah, looking forward to it. :)
13:17 julim joined #gluster
13:17 * mikemol joins gluster-dev
13:18 rafi joined #gluster
13:20 jiffin mikemol: hope u guys will find the RPMS pretty soon.
13:20 ndevos tyrok_laptop2: I did build nfs-ganesha-2.2 for CentOS7 - https://copr.fedoraproject.org/coprs/devos/nfs-ganesha/
13:21 dgandhi joined #gluster
13:21 tyrok_laptop2 Joined the mailing list, too.
13:22 jiffin ndevos++ for the build
13:22 glusterbot jiffin: ndevos's karma is now 17
13:23 tyrok_laptop2 Awesome.  Looks like that'd be worth a try.
13:23 mikemol ndevos++. Looks like we should be able to give that a shot.
13:23 glusterbot mikemol: ndevos's karma is now 18
13:24 tyrok_laptop2 Another question: is there any problem with using 3.6 clients to talk to a 3.7 server?
13:25 mikemol Background: While we know you shouldn't generally mix and match, we accidentally mounted 3.7 bricks with 3.6 clients...and it worked.
13:25 Twistedgrim joined #gluster
13:25 tyrok_laptop2 It worked for us when we tried some basic file access, but we figured that was probably not meant to be a supported use case.
13:25 spalai joined #gluster
13:26 georgeh-LT2 joined #gluster
13:29 tyrok_laptop2 ndevos: Awesome!  Looks pretty good and worth a try for us.  Thanks for building that!
13:31 B21956 joined #gluster
13:32 RobertLaptop joined #gluster
13:34 Norky joined #gluster
13:37 ndevos mikemol, tyrok_laptop2: we try to make/keep new servers and old clients work, but it is not as well tested as a same-version-everywhere setup
13:38 tyrok_laptop2 ndevos: That's about what we figured, but good to know for sure.
13:39 verdurin left #gluster
13:43 bennyturns joined #gluster
13:52 autoditac joined #gluster
14:03 julim joined #gluster
14:07 nsoffer joined #gluster
14:08 ashiq joined #gluster
14:14 arcolife joined #gluster
14:16 ToMiles joined #gluster
14:18 theron joined #gluster
14:20 ToMiles anyone know where to find the 3.7 version of the gluster packages, the ppa only goes up to glusterfs-3.6 for ubuntu trusty
14:26 tyrok_laptop2 ndevos: So, giving this a try, and it seems to be working great so far.  But is there a way to specify the ports that Ganesha uses to serve up NFS?  Would be nice if we could firewall that properly, and our firewall only has support for asking system NFS for ports.
14:37 plarsen joined #gluster
14:38 sripathi joined #gluster
14:40 johnnytran joined #gluster
14:40 Trefex joined #gluster
14:41 shubhendu joined #gluster
14:43 meghanam joined #gluster
14:44 jdarcy joined #gluster
14:47 The_Ball joined #gluster
14:54 rafi joined #gluster
14:55 ira joined #gluster
14:58 RajeshReddy joined #gluster
15:04 rwheeler_ joined #gluster
15:06 RajeshReddy joined #gluster
15:10 soumya joined #gluster
15:14 bene2 joined #gluster
15:16 nbalacha joined #gluster
15:19 jiffin joined #gluster
15:26 RajeshReddy joined #gluster
15:29 spalai left #gluster
15:30 coredump joined #gluster
15:33 bennyturns joined #gluster
15:39 shubhendu joined #gluster
15:44 Trefex joined #gluster
15:50 r22s joined #gluster
15:51 squizzi_ joined #gluster
15:52 kedmison joined #gluster
15:54 r22s Hi all, easy question from a relative beginner.  I have a cluster of several servers with 4 bricks/drives each.  I want to set a replication factor of 2 for a test, but I want each copy to be stored on a different server.  Setting "replication 2" when creating the volume results in the copies being placed on different bricks on the same server.  How can I get them to be stored separately?
15:55 mrEriksson r22s: It is controlled by the order in which you define the bricks when creating the volume
15:56 r22s mrEriksson: Interesting.  I had done "node1:brick0 node1:brick1 node1:brick2 node1:brick3 node2:brick0 [...]"
15:56 mrEriksson So if doing two replicas on four bricks over two servers, it should be something like: srv1:brick1 srv2:brick2 srv1:brick3 srv2:brick4
15:56 ndevos tyrok_laptop2: yes, there are options for Ganesha for that... and there is a #ganesha IRC channel too :-)
15:56 JoeJulian @brick order
15:56 glusterbot JoeJulian: Replicas are defined in the order bricks are listed in the volume create command. So gluster volume create myvol replica 2 server1:/data/brick1 server2:/data/brick1 server3:/data/brick1 server4:/data/brick1 will replicate between server1 and server2 and replicate between server3 and server4.
15:56 bennyturns joined #gluster
15:56 mrEriksson r22s: Yes, there is your problem then :)
15:57 ndevos tyrok_laptop2: the options are listed here: https://github.com/nfs-ganesha/nfs-ganesha/blob/V2.2-stable/src/config_samples/config.txt#L33
15:57 r22s So interleaving brick definition would solve it?  i.e. "node1:brick0 node2:brick0 node3:brick0 node1:brick1 [...]"?
15:58 mrEriksson Correct
15:58 r22s Awesome.  Thanks for the clarification, it's a big help!
15:58 mrEriksson This is explained in pretty good detail in the docs, so you should have a look and not just take my word for it, I don't really create volumes all that often
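Spelled out for r22s's layout of four bricks per server, the interleaved ordering glusterbot describes would look something like this (hostnames and brick paths are placeholders):

    gluster volume create myvol replica 2 \
        node1:/bricks/b0 node2:/bricks/b0 \
        node1:/bricks/b1 node2:/bricks/b1 \
        node1:/bricks/b2 node2:/bricks/b2 \
        node1:/bricks/b3 node2:/bricks/b3

Each consecutive pair of bricks forms one replica set, so every file ends up on two different servers.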
15:58 kedmison I have a volume op-version question.  I'm running gluster 3.6.3 on all servers and clients.  My volume files (/var/lib/glusterd/vol/<volume>/info) show op-version=3 and client-op-version=3.  How can I bring the volume op-versions up to 3.6.3?
15:59 tyrok_laptop2 ndevos: Ah, cool.  I was looking through the docs posted to a web page somewhere, and didn't see any options for it.  Thanks!
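For the port question, the relevant options live in the NFS_Core_Param block of ganesha.conf; a hedged sketch follows, with example values, and the exact option names should be checked against the config.txt ndevos linked above:

    NFS_Core_Param {
        NFS_Port = 2049;      # main NFS port
        MNT_Port = 20048;     # MOUNT protocol (NFSv3)
        NLM_Port = 38468;     # lock manager
        Rquota_Port = 875;    # rquota
    }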
15:59 mrEriksson But the order needs to be correct
15:59 JoeJulian @op-version
15:59 glusterbot JoeJulian: The operating version represents the RPC and translator capabilities required to accommodate the volume settings ( http://gluster.org/community/documentation/index.php/OperatingVersions ). To allow older version clients to connect to newer servers, reset any volume options that require the newer op-version.
15:59 JoeJulian hrm, that's not the one I was looking for
16:03 rafi joined #gluster
16:04 plarsen joined #gluster
16:04 rafi joined #gluster
16:05 JoeJulian kedmison: The op-version should change if you use a feature that needs a higher op-version. If you need to change it manually for some reason, "gluster volume set all cluster.op-version $version"
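Concretely, since kedmison is on 3.6.3 everywhere, that would be something like the following; 30600 is the 3.6 op-version that shows up later in this conversation:

    gluster volume set all cluster.op-version 30600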
16:06 kedmison I just did that command and I still see the info file for my volumes explicitly stating op-version=3 and client-op-version=3.
16:07 kedmison Should I manually remove the op-version statements from the info files?
16:07 JoeJulian What problem are you trying to solve?
16:09 bennyturns joined #gluster
16:09 kedmison Good question.  I've got a distribute-only set of bricks (42 bricks) that are on a mix of RAID-1 and RAID-6 volumes.  They also are heterogeneous in size (1TB vs 10TB).  I would like to move all of them to 10TB brick sizes but I need to remove the 1TB bricks one-at-a-time in order to re-layout the 10TB bricks on those servers.
16:11 kedmison I have a ton of small-ish (1MB or less) files on these disks, and any sort of remove-brick or rebalance takes an unbelievably long time.  In addition, I get a lot of errors with clients being unable to delete directories because they appear empty but in fact still have link-files present on some of the bricks.
16:11 cholcombe joined #gluster
16:11 kedmison So I want to get the latest protocols in operation amongst my servers, to try to address at least some of these issues.
16:13 JoeJulian Ah, I see. I don't think the op-version will do anything for you, but let me check something before I say anything absolute.
16:14 lexi2 joined #gluster
16:17 akay1 hmmm I'm running 3.6.2 but my op-version and client-op-version are both set to 2. does that mean I'm missing out on the improvements in 3.6?
16:17 squizzi_ joined #gluster
16:17 JoeJulian There's nothing in the dht translator that says anything about op[-_]version so that should have no impact on rebalance.
16:18 JoeJulian It should just mean you're not using any features that are exclusive to a higher version.
16:20 kedmison so there may be some features I'm missing out on then...  ok thanks.  The best information I've found on this so far is in this bug report https://bugzilla.redhat.com/show_bug.cgi?id=907311  which says that the op-versions are generated dynamically.  If I'm in the habit of upgrading all servers and clients to the latest, should I consider removing the explicit definitions of op-version in the info files and let the dynamic generation do its thing?
16:20 glusterbot Bug 907311: unspecified, unspecified, ---, kaushal, CLOSED CURRENTRELEASE, [Enhancement] Client op-version
16:22 JoeJulian No. Those are state files managed by glusterd. If you enable a feature that needs a higher op-version, the glusterd will handshake, check to see that the feature is between OP_VERSION_MAX and OP_VERSION_MIN for each of them. If it is, it bumps the op-version and sets the feature.
16:23 JoeJulian That's how it's /supposed/ to work. There have been occasions where it doesn't. That's when you need to bump the op-version manually.
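One way to see what the cluster has actually settled on is glusterd's own state file, which records the negotiated value on every server (standard path for an RPM install):

    grep operating-version /var/lib/glusterd/glusterd.info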
16:23 kedmison Ok, then I have a follow-up question.  if all my servers and clients are at release 3.6.3, and the op-version and client-op-version was set to 3 everywhere, why would my bricks be saying
16:25 kedmison (cont'd) Server and Client lk-version numbers are not same, reopening the fds?  Is that the 'version' attribute, not the op-version or client-op-version attribute?
16:25 glusterbot kedmison: This is normal behavior and can safely be ignored.
16:29 baoboa joined #gluster
16:32 JoeJulian The lk-version is not the op-version. When locks are established or removed, the server increments a version counter. If they don't match, then locks need to be played back on whichever one is trailing behind.
16:33 kedmison ok.  good to have that confirmed; thank you.
16:34 JoeJulian I realize none of this actually solves your rebalance problem though. :(
16:34 JoeJulian I don't have a solution for that. I've been raging about rebalance for years.
16:37 RameshN joined #gluster
16:39 kedmison True.  I was also trying the add-brick/remove-brick approach for substituting one brick for another, and it was degenerating into a generalized rebalance rather than a targeted substitution.
16:48 n-st joined #gluster
17:06 hagarth joined #gluster
17:16 victori joined #gluster
17:17 Rapture joined #gluster
17:25 kedmison So, I was surfing the code in glusterd-volume-set.c and found that the cluster.weighted-rebalance option is targeted for GD_OP_VERSION_3_6_0.
17:25 jbautista- joined #gluster
17:26 kedmison If I understand you correctly, JoeJulian, that means that if I set this option on my volume, the op-version should get upgraded on all my volumes to 30600 (since that's what weighted-rebalance needs)?
17:26 JoeJulian That's the way I read it, yes.
17:27 JoeJulian That also tells me that my grep was flawed in the dht translator. I searched for "version" not "VERSION"...
17:29 JoeJulian Odd. There's no reference in dht-shared, which accepts that option, to the op-version...
17:31 jbautista- joined #gluster
17:33 jcastill1 joined #gluster
17:36 Vortac Does Gluster support case-insensitive file systems?
17:37 JoeJulian Vortac: I can't think of anything that would block it, as long as that filesystem supports extended attributes.
17:38 jcastillo joined #gluster
17:42 kedmison I see "weighted-rebalance" in dht-shared.c.  So, from my totally not-in-depth analysis, it seems that the options are parsed and tested for version-compatibility in glusterd-volume-set, and then dht-shared tests the "weighted-rebalance" value and sets it into conf->do_weighted, which is then used in dht-selfheal.c
17:48 squizzi_ joined #gluster
17:49 JoeJulian kedmison: If setting that attribute fails, we'll want to check the glusterd logs for errors (" E ") on all the servers near the timestamp of the attempt.
17:50 JoeJulian (this is where logstash/kibana pay off)
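On each server that check would be something like the following; the log file name is the one that comes up later in this conversation:

    grep ' E ' /var/log/glusterfs/etc-glusterfs-glusterd.vol.log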
17:54 jdarcy joined #gluster
17:56 glusterbot News from newglusterbugs: [Bug 1227469] should not spawn another migration daemon on graph switch <https://bugzilla.redhat.com/show_bug.cgi?id=1227469>
18:00 kedmison So, I do see an error wrt. ganesha:
18:00 kedmison Failed to execute script: /var/lib/glusterd/hooks/1/set/post/S31ganesha-set.sh --volname=virusvault -o cluster.weighted-rebalance=on --gd-workdir=/var/lib/glusterd
18:01 kedmison and bizarrely, it happened on all but one of my servers.  I would have expected an all-or-nothing scenario.
18:03 kedmison oh wait, there is some sort of a log rollover issue on the 7th node; it's there but in a log file that's not technically the current one.  i.e.  /var/log/glusterfs/etc-glusterfs-glusterd.vol.log-20150601  contains 2015-06-02 log messages.
18:08 kedmison but the /var/lib/glusterd/vol/<volume>/info now shows op-version 30600 and the gluster volume info <volume> also shows cluster.weighted-rebalance: on
18:17 adzmely joined #gluster
18:19 jiffin joined #gluster
18:23 soumya joined #gluster
18:33 nsoffer joined #gluster
18:44 jmarley joined #gluster
18:44 jmarley joined #gluster
18:56 The_Ball itisrav: "For ovirt setups a replica 3 setup is better as it provides better protection against split-brains." Does this still apply when one has a proper fence STONITH device in place?
18:57 anrao joined #gluster
18:59 rotbeard joined #gluster
19:06 lexi2 joined #gluster
19:08 ToMiles joined #gluster
19:09 ProT-0-TypE joined #gluster
19:18 adzmely joined #gluster
19:25 madphoenix joined #gluster
19:26 madphoenix anybody know about a workaround for this? https://bugzilla.redhat.com/show_bug.cgi?id=1168897.  i have a bad brick that needs to be removed, but this is blocking me.  it's not clear from the ticket whether overriding that value in glusterd.info will work
19:26 glusterbot Bug 1168897: medium, medium, ---, bugs, NEW , Attempt remove-brick after node has terminated in cluster gives error: volume remove-brick commit force: failed: One or more nodes do not support the required op-version. Cluster op-version must atleast be 30600.
19:27 madphoenix there is another bug (1127328) which shows as a closed duplicate of 1109742, but 1109742 isn't public
19:36 glusterbot News from resolvedglusterbugs: [Bug 1220047] Data Tiering:3.7.0:data loss:detach-tier not flushing data to cold-tier <https://bugzilla.redhat.com/show_bug.cgi?id=1220047>
19:48 theron joined #gluster
19:57 John_HPC joined #gluster
19:57 John_HPC Quick question. Am I missing something, or is there a command/file/something that says how many objects exist in your volume?
20:01 rotbeard joined #gluster
20:03 ndevos John_HPC: you are not the only one missing that feature, it is being planned, not sure how much progress there has been lately
20:03 ndevos http://www.gluster.org/community/documentation/index.php/Features/Object_Count for some details, email the owners with the lists on CC for more details
20:08 John_HPC ndevos: thanks. So far my rebalance has scanned 80million files and I'm scared to see the overall total :P
20:08 ndevos John_HPC: oh, wow!
20:19 adzmely joined #gluster
20:22 badone_ joined #gluster
20:26 DV joined #gluster
21:06 [7] so how do I migrate a brick to a different server?
21:06 [7] the instructions at https://github.com/gluster/glusterfs/blob/master/doc/admin-guide/en-US/markdown/admin_managing_volumes.md#migrating-volumes don't seem to work
21:07 [7] apparently these commands don't exist anymore in current gluster versions
21:07 [7] (I'm talking about a distribute-replicate volume)
21:11 TheCthulhu joined #gluster
21:19 adzmely joined #gluster
21:20 squizzi_ joined #gluster
21:27 wkf joined #gluster
21:38 badone_ joined #gluster
21:41 jdarcy left #gluster
22:13 jmarley joined #gluster
22:15 adzmely joined #gluster
22:22 Philambdo joined #gluster
22:25 purpleidea joined #gluster
22:25 purpleidea joined #gluster
22:31 Philambdo joined #gluster
22:33 adzmely joined #gluster
22:34 twisted` joined #gluster
22:53 diegows joined #gluster
23:03 [7] anyone?
23:16 JoeJulian [7]: That's the documentation from the current master. What problem are you having?
23:20 JoeJulian [7]: ?
23:21 [7] I only get this output: "Usage: volume replace-brick <VOLNAME> <SOURCE-BRICK> <NEW-BRICK> {commit force}"
23:22 [7] command was: gluster volume replace-brick ove host1:/storage/gluster/brick01/ove host2:/storage/gluster/brick01/ove start
23:22 JoeJulian What version are you running?
23:22 [7] 3.7.1
23:23 [7] which is what oVirt ships with
23:23 JoeJulian [7]: oVirt ships its own distro?
23:23 [7] it does, but I'm just talking about the centos repository here
23:30 JoeJulian Nice updating of the documentation... <grumble> Looks like they took out start.
23:31 JoeJulian Well, if it's replicated, just commit the replace and self-heal will populate the new brick.
23:31 JoeJulian I'll harass someone about not updating the documentation.
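Put together, JoeJulian's suggestion for [7]'s volume would be roughly the following, assuming the new host has already been peer-probed and the brick directory exists on it; the heal command is added here so the new brick is repopulated immediately rather than waiting for the self-heal daemon to crawl:

    gluster volume replace-brick ove host1:/storage/gluster/brick01/ove \
        host2:/storage/gluster/brick01/ove commit force
    gluster volume heal ove full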
23:33 aaronott joined #gluster
23:33 TheCthulhu Another quick question: Are there any problems in relocating the skeleton home directory and user home directories, excluding root of course, to a gluster volume?
23:34 TheCthulhu Or any tradeoffs as such I should be familiar with
23:35 JoeJulian Just the normal overhead involved with self-heal checks.
23:35 JoeJulian I ran user homes on gluster with no problems.
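For reference, a typical way to put /home on a gluster volume from the clients is a plain FUSE mount in /etc/fstab; the server and volume name below are placeholders:

    server1:/homes  /home  glusterfs  defaults,_netdev  0  0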
23:35 [7] JoeJulian: well, self-heal doesn't really look like a good solution here, for several reasons
23:36 [7] I did attempt it though, and didn't manage to do it either
23:36 [7] # gluster volume replace-brick ove host1:/storage/gluster/brick01/ove host2:/storage/gluster/brick01/ove commit force
23:36 [7] volume replace-brick: failed: brick: host1:/storage/gluster/brick01/ove does not exist in volume: ove
23:37 plarsen joined #gluster
23:37 [7] however: "gluster volume info ove" shows "Brick2: host1:/storage/gluster/brick01/ove"
23:38 gildub joined #gluster
23:38 Prilly joined #gluster
23:40 [7] JoeJulian: fyi, looks like replace-brick start was deprecated ~3 years ago, with objections from Jon Archer, which are basically what I'm concerned about as well
23:41 JoeJulian Objections from me as well.
23:41 JoeJulian This is why we need a community board.
23:42 [7] so discarding a good replica is basically the only way to rebuild it somewhere else, losing redundancy during the process?
23:42 JoeJulian yep
23:42 [7] very nasty.
23:42 JoeJulian I agree
23:42 [7] don't even want to imagine what that means for quorum situations etc.
23:43 JoeJulian Interesting thought. I haven't tried that myself.
23:44 [7] while we're at it, I haven't fully understood how adding new bricks works. that basically modifies the dht hashing in a way that moves almost everything around, right?
23:45 [7] how do clients know what is where while that stuff is moving? do files appear to be temporarily missing during that process because the clients just don't know where they are?
23:45 JoeJulian Almost. Jeff Darcy presented a more efficient way of doing it a few years ago, but was never allocated time to implement it.
23:46 JoeJulian I'm not entirely sure how clients know. I know they do know though.
23:46 [7] ok... otherwise that would have been a killer for any kind of enterprise scaleout filesystem use
23:46 JoeJulian i know, right?
23:46 [7] that brick rebuild thing is bad enough already
23:47 [7] I'm seriously considering whether I'll dare to go with gluster for a project or rather resort to DRBD + gfs2 or whatever, hoping that I don't have to scale beyond 2-3 nodes
23:48 [7] ceph doesn't really seem ready either
23:48 JoeJulian http://permalink.gmane.org/gmane.comp.file-systems.gluster.devel/11247
23:49 [7] thanks ;)
23:49 JoeJulian Dude, use drbd. I dare you.
23:49 JoeJulian Nothing makes people happier than coming here from losing everything there.
23:51 JoeJulian ceph is ready for larger clusters of block devices where performance is not an issue.
23:51 JoeJulian Maintenance and installation is a pain, though.
23:51 [7] yeah, but I'm looking mostly for samba fileservers here, not so much for block storage
23:53 JoeJulian I would use 3.6.3, not 3.7. oVirt doesn't actually ship with glusterfs, afaict, so I'm not sure how you got there.
23:53 JoeJulian and 3.6.3 has the start command still.
23:53 JoeJulian I probably will never recommend 3.7 because of that loss.
23:53 [7] I'm looking into ovirt 3.6 here
23:54 [7] which is alpha right now
23:54 [7] I'm testing for a project that will likely launch by the end of this year
23:54 [7] so I guess it should be sufficiently stable by that time
23:54 JoeJulian yes
23:54 [7] (the VM management layer also isn't quite as important as the backend storage of course)
23:56 [7] any idea why my "replace-brick commit force" above failed?
23:57 JoeJulian No clue. I'd check the glusterd logs. I suspect the error is in one of them.
23:58 [7] "glusterd logs" are which ones? etc-glusterfs-glusterd.vol.log? cli.log?
23:59 [7] hm, etc-glusterfs-glusterd.vol.log suggests a name resolution problem
23:59 [7] I'll look into that
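A couple of generic checks for that kind of name-resolution problem, run on each server; the hostnames are the ones from [7]'s replace-brick command:

    gluster peer status
    getent hosts host1 host2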
