
IRC log for #gluster, 2014-06-02


All times shown according to UTC.

Time Nick Message
00:16 bennyturns joined #gluster
00:26 firemanxbr joined #gluster
00:59 ProT-O-TypE joined #gluster
01:16 bala joined #gluster
01:33 klaas_ joined #gluster
01:50 DV__ joined #gluster
01:53 gildub joined #gluster
02:11 pdrakeweb joined #gluster
02:18 hagarth joined #gluster
02:20 B21956 joined #gluster
02:37 sjm joined #gluster
02:37 sjm left #gluster
02:38 sjm joined #gluster
02:38 harish joined #gluster
02:42 vimal joined #gluster
02:53 recidive joined #gluster
02:53 bala joined #gluster
02:54 jag3773 joined #gluster
02:55 bharata-rao joined #gluster
02:57 athan joined #gluster
02:59 athan Hi everyone, I'm having a little trouble getting started and can't find much help on google. I would like to set up my spare hard drive (at /dev/sdb1, ext4) as the shared drive, but I'm not exactly sure how to supply the parameter to `gluster volume create ...`. Could anyone help me? Thank you in advance!
03:14 Ark athan: you need to make a directory and partition on /dev/sdb1 that you will use for a brick; once you have a brick you can make a volume with it. http://gluster.org/community/documentation/index.php/Gluster_3.2:_Creating_Distributed_Volumes
03:14 glusterbot Title: Gluster 3.2: Creating Distributed Volumes - GlusterDocumentation (at gluster.org)
03:14 Ark http://gluster.org/community/documentation/index.php/Gluster_3.2:_Creating_Replicated_Volumes
03:14 glusterbot Title: Gluster 3.2: Creating Replicated Volumes - GlusterDocumentation (at gluster.org)
03:17 athan zero_ark: Thank you :)
03:18 zero_ark if you used lvm to make your OS partitions it should be a breeze, fdisk is not too bad either
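A minimal sketch of what Ark and zero_ark describe, assuming /dev/sdb1 is already formatted as ext4; the mount point, brick directory, hostname and volume name here are made up for illustration:

    mkdir -p /data/gluster
    mount /dev/sdb1 /data/gluster                    # mount the spare partition
    mkdir -p /data/gluster/brick1                    # directory that will serve as the brick
    gluster volume create myvol server1:/data/gluster/brick1
    gluster volume start myvol
    mount -t glusterfs server1:/myvol /mnt/shared    # mount the volume on a client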
03:19 _pol joined #gluster
03:22 MacWinne_ joined #gluster
03:24 kshlm joined #gluster
03:43 itisravi joined #gluster
03:51 nishanth joined #gluster
03:53 shubhendu joined #gluster
04:04 hagarth1 joined #gluster
04:10 vimal joined #gluster
04:12 deepakcs joined #gluster
04:19 velladecin joined #gluster
04:24 kkeithley1 joined #gluster
04:27 ProT-0-TypE joined #gluster
04:29 ProT-0-T_ joined #gluster
04:35 ppai joined #gluster
04:40 kdhananjay joined #gluster
04:48 dusmant joined #gluster
04:50 davinder6 joined #gluster
04:53 psharma joined #gluster
04:54 spandit joined #gluster
05:05 aravindavk joined #gluster
05:05 ramteid joined #gluster
05:08 saurabh joined #gluster
05:12 hagarth joined #gluster
05:14 nbalachandran joined #gluster
05:16 kumar joined #gluster
05:18 oxidane joined #gluster
05:18 ndarshan joined #gluster
05:24 rjoseph joined #gluster
05:25 MacWinne_ joined #gluster
05:27 oxidane left #gluster
05:27 oxidane joined #gluster
05:27 vpshastry joined #gluster
05:32 raghu joined #gluster
05:35 kanagaraj joined #gluster
05:47 aravindavk joined #gluster
05:48 hagarth joined #gluster
05:50 dusmant joined #gluster
05:52 aravindavk joined #gluster
05:54 rjoseph joined #gluster
06:03 lalatenduM joined #gluster
06:10 bala1 joined #gluster
06:14 jcsp1 joined #gluster
06:14 vimal joined #gluster
06:22 rastar joined #gluster
06:24 nshaikh joined #gluster
06:28 glusterbot New news from newglusterbugs: [Bug 959477] nfs-server: stale file handle when attempting to mount directory <https://bugzilla.redhat.com/show_bug.cgi?id=959477>
06:29 aravindavk joined #gluster
06:30 dusmant joined #gluster
06:33 rjoseph joined #gluster
06:38 7F1AASPYQ joined #gluster
06:39 hagarth joined #gluster
06:43 ekuric joined #gluster
06:47 ctria joined #gluster
06:48 meridion joined #gluster
06:51 nishanth joined #gluster
06:57 spandit joined #gluster
06:57 eseyman joined #gluster
06:58 glusterbot New news from newglusterbugs: [Bug 1103577] Dist-geo-rep : geo-rep doesn't log the list of skipped gfid after it failed to process the changelog. <https://bugzilla.redhat.com/show_bug.cgi?id=1103577>
07:02 swebb joined #gluster
07:03 meghanam joined #gluster
07:03 meghanam_ joined #gluster
07:17 nishanth joined #gluster
07:24 keytab joined #gluster
07:26 xavih joined #gluster
07:27 ngoswami joined #gluster
07:28 glusterbot New news from newglusterbugs: [Bug 1103591] snapshot xlator's use of offset in dirent structure causes build failure in NetBSD <https://bugzilla.redhat.com/show_bug.cgi?id=1103591>
07:41 mbukatov joined #gluster
07:47 liquidat joined #gluster
07:47 ProT-0-TypE joined #gluster
07:52 Philambdo joined #gluster
07:58 glusterbot New news from newglusterbugs: [Bug 847821] After disabling NFS the message "0-transport: disconnecting now" keeps appearing in the logs <https://bugzilla.redhat.com/show_bug.cgi?id=847821>
07:59 spandit joined #gluster
08:00 rjoseph joined #gluster
08:02 karimb joined #gluster
08:04 andreask joined #gluster
08:09 ngoswami joined #gluster
08:12 ktosiek joined #gluster
08:14 haomaiwang joined #gluster
08:17 karimb left #gluster
08:24 haomaiwang joined #gluster
08:34 morfair Hi all. Help please. I have a gluster volume in Distribute mode with one brick (one server). How do I add a second brick from another server in Replicated mode?
08:34 qdk_ joined #gluster
08:40 ndevos morfair: something like this should do that: gluster volume add-brick $VOLUME replica 2 $SERVER:/path/to/brick
08:41 morfair ndevos, thank you
08:42 ndevos morfair: you're welcome
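Filled in with hypothetical server, volume and brick names, ndevos's suggestion looks roughly like this:

    # turn the existing single-brick distribute volume into a 2-brick replica
    gluster volume add-brick myvol replica 2 server2:/data/gluster/brick1
    gluster volume info myvol          # Type should now read Replicate
    gluster volume heal myvol full     # optionally trigger a full sync onto the new brick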
08:43 haomaiwang joined #gluster
08:50 hagarth :O
08:52 spandit joined #gluster
08:55 vpshastry joined #gluster
08:58 glusterbot New news from newglusterbugs: [Bug 1045309] "volfile-max-fetch-attempts" was not deprecated correctl.. <https://bugzilla.redhat.com/show_bug.cgi?id=1045309> || [Bug 1103636] glusterfsd crashed when quota was enabled, then disabled and enabled again <https://bugzilla.redhat.com/show_bug.cgi?id=1103636>
09:14 meghanam joined #gluster
09:14 meghanam_ joined #gluster
09:16 edward1 joined #gluster
09:17 aravindavk joined #gluster
09:18 rjoseph joined #gluster
09:21 haomaiwa_ joined #gluster
09:22 spiekey joined #gluster
09:22 nshaikh joined #gluster
09:22 spiekey Hello!
09:22 glusterbot spiekey: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
09:24 spiekey i am getting: Fatal: unable to get RDMA device list => http://pastebin.com/tEFsD6rQ on CentOS 6.5
09:24 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
09:24 spiekey http://fpaste.org/106428/17010561/
09:24 glusterbot Title: #106428 Fedora Project Pastebin (at fpaste.org)
09:26 hagarth spiekey: are you trying to use rdma?
09:26 spiekey hagarth: until now i did not even know what it is.
09:27 hagarth spiekey: if you are not using rdma, you can remove rdma from glusterd.vol and attempt starting it again.
09:27 spandit joined #gluster
09:29 mbukatov joined #gluster
09:32 spiekey hagarth: i just wonder why it fails. the rpm package and config was done by ovirt/vdsm. but it works on the first node
09:33 spiekey has this something to do with infiniband?
09:34 _pol joined #gluster
09:40 Slashman joined #gluster
09:47 aravindavk joined #gluster
09:48 hagarth spiekey: yes, rdma is related to infiniband.
09:48 monotek joined #gluster
09:48 spiekey ok, i solved it. it has nothing to do with rdma :)
09:49 spiekey i changed my ips in /etc/hosts and the file /var/lib/glusterd/peers/6b6650c3-5105-45bd-aa54-3619f7fa78fb still had the old ip
09:50 spiekey does this make sense?
09:52 bharata-rao joined #gluster
09:53 hagarth spiekey: if gluster peer status shows all peers as connected, you are mostly good :)
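A sketch of the two checks discussed in this exchange; the file locations are the usual defaults rather than quoted from the log:

    # 1. see whether glusterd is configured for rdma in addition to tcp
    grep transport-type /etc/glusterfs/glusterd.vol
    #    a line like "option transport-type socket,rdma" triggers the RDMA device
    #    probe; dropping ",rdma" avoids it on hosts without InfiniBand hardware
    # 2. after fixing the addresses in /etc/hosts and the peer files, verify the pool
    gluster peer status                # each peer should show State: Peer in Cluster (Connected)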
09:57 harish joined #gluster
09:58 glusterbot New news from newglusterbugs: [Bug 1103643] geo-rep: Changelog agent process goes to zombie when worker process is killed/goes to faulty <https://bugzilla.redhat.com/show_bug.cgi?id=1103643>
10:01 haomaiwa_ joined #gluster
10:03 harish joined #gluster
10:04 nshaikh joined #gluster
10:23 meghanam_ joined #gluster
10:23 meghanam joined #gluster
10:25 haomaiwa_ joined #gluster
10:29 glusterbot New news from newglusterbugs: [Bug 1103665] [SNAPSHOT]: delete only one oldest snapshot <https://bugzilla.redhat.com/show_bug.cgi?id=1103665>
10:31 haomaiwa_ joined #gluster
10:33 spandit joined #gluster
10:37 vpshastry joined #gluster
10:41 ccha2 hello I want to test the new server.manage-gids on 3.5.1 beta
10:42 ccha2 I updated both server from 3.5 to 3.5.1 beta
10:42 ccha2 when I set this option I got this error  Error: Required op_version (4) is not supported
10:45 rastar ccha2, you need to update the clients too
10:46 vpshastry joined #gluster
10:46 ccha2 hum I didn't test on client side yet
10:47 ccha2 I umount this volume on the client
10:48 ccha2 operating-version is 3 in /var/lib/glusterd/glusterd.info
10:48 lalatenduM joined #gluster
10:49 ccha2 rastar: on the client side, the mount will be nfs for this option
10:49 rastar did you restart processes on server after updating to 3.5.1
10:49 rastar ?
10:49 ccha2 yes
10:50 rastar op_version error indicates that you are trying to use a feature not available/supported by current running versions
10:52 ndevos ccha2: that option sets a configuration item on the bricks and on the clients, you indeed need to update both sides
10:52 ccha2 hum does operating-version in glusterd.info get updated when you update glusterfs ?
10:53 ndevos that is something I still have to figure out :)
10:54 ccha2 ndevos: I don't understand about client side
10:54 ccha2 ah this option is not only for nfs ?
10:54 rgustafs joined #gluster
10:55 ndevos ccha2: client-side includes the nfs-server, the nfs-server is a client for the bricks
10:55 ndevos and no, the option is for any client (nfs-server, fuse, libgfapi, .....)
10:55 ccha2 oh ok
10:56 ccha2 I thought this option was like nfs.server-aux-gids
10:57 zyxe joined #gluster
10:57 ccha2 but I umount all volume on client
10:57 ndevos it functions similarly, but it also causes the client not to send any auxiliary groups
10:57 ccha2 if
10:57 kshlm joined #gluster
10:58 ndevos the clients need to be updated, including a restart of the processes (like the fuse mounts, or gluster-nfs)
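For reference, a sketch of the setting ccha2 is testing, with a hypothetical volume name; the set is staged on every peer, which is why an out-of-date server or connected client makes it fail:

    gluster volume set myvol server.manage-gids on
    # staging fails with "Required op_version (4) is not supported" until every
    # server (and every connected client) runs a version that knows the option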
10:59 glusterbot New news from newglusterbugs: [Bug 802243] NFS: mount point unresponsive when disks get full <https://bugzilla.redhat.com/show_bug.cgi?id=802243> || [Bug 822361] Lookup of files with gfid's (created from backend) on nfs mount are not force merged <https://bugzilla.redhat.com/show_bug.cgi?id=822361>
10:59 ccha2 the error is "volume set: failed: Staging failed on test02.toto.com. Error: Required op_version (4) is not supported"
10:59 ccha2 that's my 2nd server
11:02 ccha2 I stopped glusterfs on my 2nd server, and the set command succeeded on the 1st server
11:03 ccha2 hum something wrong on my 2nd server 0-management: Failed to get handshake ack from remote server
11:04 ndevos so, were both servers updated to the 3.5.1beta version?
11:04 ccha2 # glusterfs --version
11:04 ccha2 glusterfs 3.5.1beta built on May 26 2014 18:38:23
11:06 ndevos hmm
11:12 ccha2 0-management: failed to validate the operating version of peer
11:12 ira joined #gluster
11:12 ccha2 0-management: cannot reduce operating version to 3 from current version 4 as volumes exist
11:13 ccha2 on 1st server /var/lib/glusterd/glusterd.info op version is 4
11:13 ccha2 on 2nd server is still a 3
11:16 ppai joined #gluster
11:19 ndevos I don't know how the op-version in /var/lib/glusterd/glusterd.info is supposed to get updated...
11:22 calum_ joined #gluster
11:27 diegows joined #gluster
11:29 glusterbot New news from newglusterbugs: [Bug 1065654] nfs-utils should be installed as dependency while installing glusterfs-server <https://bugzilla.redhat.com/show_bug.cgi?id=1065654> || [Bug 1092158] The daemon options suggested in /etc/sysconfig/glusterd are not being read by the init script <https://bugzilla.redhat.com/show_bug.cgi?id=1092158>
11:33 hagarth ndevos, ccha2: normally op-version gets updated upon execution of the first volume set command that needs a higher op version.
11:33 hagarth ndevos: the op version gets bumped up only if all servers are capable of handling the higher op version
11:34 ndevos hagarth: ah, okay
11:34 hagarth for options that refer to the client stack, op-versions of clients connected to the volume are also considered.
11:34 ndevos hagarth: so, in case a server is offline, and a volume option is set on the other servers, will the offline server join and update the op-version when it gets back online?
11:35 hagarth ndevos: yes, it should.
11:36 hagarth the server that was offline should be capable of upgrading to that op-version.
11:36 ndevos hagarth: okay, then I wonder why some servers would have a lower op-version than others... any idea what could cause that?
11:36 ccha2 arg I stopped all services of both servers
11:36 ccha2 when I start the 1st server
11:36 ccha2 [2014-06-02 11:35:30.530911] E [glusterd-store.c:1415:glusterd_restore_op_version] 0-management: wrong op-version (4) retrieved
11:36 ccha2 [2014-06-02 11:35:30.530964] E [glusterd-store.c:2655:glusterd_restore] 0-management: Failed to restore op_version
11:36 ccha2 [2014-06-02 11:35:30.530986] E [xlator.c:403:xlator_init] 0-management: Initialization of volume 'management' failed, review your volfile again
11:36 ccha2 [2014-06-02 11:35:30.531001] E [graph.c:307:glusterfs_graph_init] 0-management: initializing translator failed
11:38 spandit joined #gluster
11:38 rjoseph joined #gluster
11:39 vimal joined #gluster
11:42 ndevos semiosis: maybe you can give your input on bugs 764624 and 764063, maybe close them too?
11:45 dusmant joined #gluster
11:47 cvdyoung left #gluster
11:51 hagarth ccha2: can you please check the op-version of the 1st server?
11:52 hagarth s/op-version/glusterfs version/
11:52 glusterbot What hagarth meant to say was: ccha2: can you please check the glusterfs version of the 1st server?
11:52 B21956 joined #gluster
11:53 bala1 joined #gluster
11:53 ccha2 ok, I cleaned up and removed everything related to manage-gids and both servers are fine
11:53 hagarth ndevos: upgrades from older versions do not bump up op-version
11:53 hagarth ndevos: if you have a mixture of upgraded nodes and fresh installations in a cluster, then there is a possibility of mixed op-versions in the cluster.
11:54 ccha2 on both servers operating-version=3 in glusterd.info
11:54 ndevos hagarth: ah, yes, that makes sense
11:54 hagarth ccha2: i will bbiab, should be able to get you to op-version 4 after the gid option is set
11:54 ccha2 both running as replication and working fine
11:55 vimal joined #gluster
11:56 ccha2 none of the volumes are mounted by any client
11:56 ccha2 volume set: failed: Staging failed on test02.toto.com. Error: Required op_version (4) is not supported
11:57 ccha2 and if I run the set command on the 2nd server
11:57 ccha2 volume set: failed: Staging failed on test01.toto.com. Error: Required op_version (4) is not supported
11:58 ekuric joined #gluster
11:58 ccha2 ==> /var/log/glusterfs/etc-glusterfs-glusterd.vol.log <==
11:58 ccha2 [2014-06-02 11:57:06.912247] E [glusterd-op-sm.c:433:glusterd_op_stage_set_volume] 0-management: Required op_version (4) is not supported
11:58 ccha2 [2014-06-02 11:57:06.912304] E [glusterd-op-sm.c:3886:glusterd_op_ac_stage_op] 0-management: Stage failed on operation 'Volume Set', Status : -1
11:59 glusterbot New news from newglusterbugs: [Bug 1009076] Many warnings about unused results on F19 with current git head <https://bugzilla.redhat.com/show_bug.cgi?id=1009076>
12:00 ccha2 I succeeded previously when I stopped glusterfs on the 2nd server
12:03 edward1 joined #gluster
12:06 itisravi joined #gluster
12:07 andreask joined #gluster
12:10 hchiramm__ joined #gluster
12:10 spandit joined #gluster
12:12 mjsmith2 joined #gluster
12:14 rjoseph joined #gluster
12:16 harish joined #gluster
12:18 ctria joined #gluster
12:22 gildub joined #gluster
12:23 bala1 joined #gluster
12:29 zero_ark joined #gluster
12:43 bala1 joined #gluster
12:45 ccha2 ndevos: ok, I'm trying manage-gids on 1 server
12:46 ccha2 tested with 100 groups and nfs mount, it works
12:47 ccha2 but I get these messages if I run ls in a loop
12:47 ccha2 [2014-06-02 12:44:54.794145] W [nfs-fops.c:62:nfs_fix_groups] 0-nfs-server: too many groups, reducing 102 -> 95
12:47 spiekey joined #gluster
12:49 dusmant joined #gluster
12:52 lalatenduM joined #gluster
12:53 sage__ joined #gluster
13:01 ccha2 ndevos: what is the groups limit for manage-gids
13:02 ccha2 I tested with 1000 groups
13:06 monotek after restarting my whole gluster because of a power failure i have some strange behaviour. everything seems to work for the clients but i have errors in the logs regarding xattr... these are the logs: http://paste.ubuntu.com/7572887/
13:06 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
13:06 itisravi_ joined #gluster
13:07 monotek e.g. "0-syscheck-posix: removexattr on /glusterfs/syscheck/testfile (for ): Numerical result out of range"
13:07 ndevos ccha2: I think the max number of groups is 65535
13:08 sroy joined #gluster
13:10 sjm joined #gluster
13:10 ctria joined #gluster
13:11 davinder6 joined #gluster
13:13 Norky joined #gluster
13:15 ccha2 ndevos: added the option, everything is ok. But if I stop glusterfs and kill all processes, then I can't start gluster again :(
13:20 Tume|Sai joined #gluster
13:20 Tume|Sai hola
13:20 Tume|Sai If anyone could give me some pointers on my setup, that would be great. I have a 2-node cluster, each with an 18TB disk cached with enhanceio (12x SSD in RAID 10); connectivity is done over SDR InfiniBand. For some reason, even with IPoIB I get io errors on writes in my kvm guests. With RDMA the whole thing gives even more IO errors. Running Ubuntu 14.04 and Gluster 3.5
13:24 mjsmith2 joined #gluster
13:25 Tume|Sai two bricks in mirrored volume, io errors don't happen when writing directly to the disk
13:26 hchiramm__ joined #gluster
13:29 glusterbot New news from newglusterbugs: [Bug 1103756] inode lru limit reconfigure option does not actually change the lru list of the inode tabke <https://bugzilla.redhat.com/show_bug.cgi?id=1103756> || [Bug 1098025] Disconnects of peer and brick is logged while snapshot creations were in progress during IO <https://bugzilla.redhat.com/show_bug.cgi?id=1098025>
13:37 _Bryan_ joined #gluster
13:37 ndevos ccha2: I'm not following, you can (not?) start gluster again?
13:38 coredump joined #gluster
13:45 sjm joined #gluster
13:45 mortuar joined #gluster
13:46 ccha2 yes, FAILED
13:46 ccha2 I can't start gluster again
13:47 japuzzo joined #gluster
13:47 ccha2 both servers really are on 3.5.1 beta
13:48 ndevos ccha2: what does not start, glusterd or the brick processes?
13:48 ccha2 glusterd
13:49 ndevos and you get that same op-version error as before?
13:49 Thilam joined #gluster
13:51 monotek my system is ubuntu 12.04 with glusterfs 3.4.3 from semiosis ppa. fs is ext4. all volumes i use are distributed replicated.
13:51 monotek after restarting my whole gluster because of a power failure i have some strange behaviour. everything seems to work for the clients but i have errors in the logs regarding xattr and all new files need self heal, which seems not to work.
13:51 monotek these are the complete logs of client & server while creating 1 new file named "testfile": http://paste.ubuntu.com/7572887/
13:51 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
13:52 ccha2 ndevos: when I start glusterd, I have these errors in /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
13:53 ccha2 [2014-06-02 13:15:32.338310] E [glusterd-store.c:1415:glusterd_restore_op_version] 0-management: wrong op-version (4) retrieved
13:53 ccha2 [2014-06-02 13:15:32.338337] E [glusterd-store.c:2655:glusterd_restore] 0-management: Failed to restore op_version
13:53 ccha2 [2014-06-02 13:15:32.338359] E [xlator.c:403:xlator_init] 0-management: Initialization of volume 'management' failed, review your volfile again
13:53 ccha2 [2014-06-02 13:15:32.338375] E [graph.c:307:glusterfs_graph_init] 0-management: initializing translator failed
13:53 ndk joined #gluster
13:56 ndevos ccha2: maybe there is an incorrect op-version in one of the /var/lib/glusterd/vols/$VOLUME/info files?
13:58 * ndevos has a 3 in glusterd.info, and op-version of most volumes/info is set to 3 too (except for one, hmm)
13:59 ccha2 since I have 4 volumes, I set manage-gids on only 1 volume
13:59 rgustafs joined #gluster
14:00 ccha2 so I have op-version 4 for this volume and for glusterd.info
14:00 gmcwhistler joined #gluster
14:00 ccha2 other volumes are op-version 2
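The stored values being compared here can be inspected directly; the paths are the ones mentioned in the conversation, though the exact key names may vary slightly between releases:

    cat /var/lib/glusterd/glusterd.info              # e.g. operating-version=3 (or 4 after the bump)
    grep op-version /var/lib/glusterd/vols/*/info    # per-volume op-version entries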
14:00 sjm joined #gluster
14:06 JoeJulian op-version 1 = Latin, op-version 2 = Roman, op-version 3 = old Spanish, op-version 4 = modern Spanish. If you think of it like this, it's pretty clear to see that someone who speaks op-version 2 isn't going to understand all the words that op-version 4 speaks. Similarly, the rpc dialects have progressed to add new rpc calls and/or changed the parameters necessary when calling those rpc calls. They all have to speak the same "language".
14:07 ndevos ccha2: I *think* that should be ok, but my op-version understanding is limited :)
14:08 pdrakeweb joined #gluster
14:08 hchiramm_ joined #gluster
14:10 ccha2 JoeJulian: I didn't set any op-version
14:10 nshaikh joined #gluster
14:11 ndevos JoeJulian: the op-version was invented to provide a solution for doing partial environment updates, iiuc, the op-version of a brick/volume must be <= op-version of the glusterd (and similar for clients)
14:11 ccha2 when I added server.manage-gids the operating-version in glusterd.info went from 3 to 4
14:11 Ark joined #gluster
14:11 ccha2 and for the volume from 2 to 4
14:11 ndevos ccha2: right, that is what I would expect to happen
14:12 JoeJulian ndevos: yes, but that allows (in my example) someone who speaks Castilian to instead speak Latin. If you want them all to speak modern Spanish, you have to set them all to speak that together.
14:12 ndevos JoeJulian: yes, right
14:13 ccha2 I tried to put op-version 4 for all volumes even the ones without the options
14:13 ccha2 glusterd got same error messages
14:14 ndevos I wonder why glusterd thinks the op-version is wrong...
14:14 ccha2 there is a max and min
14:14 JoeJulian What I wish it would do would be to do a capabilities query and just choose the lowest common. That would then be self-upgrading.
14:14 ccha2 where can I find these value ?
14:14 ccha2 http://www.gluster.org/community/documentation/index.php/Features/Opversion
14:14 glusterbot Title: Features/Opversion - GlusterDocumentation (at www.gluster.org)
14:16 JoeJulian oh, interesting. I should read more... :D
14:17 JoeJulian So you set a feature on one volume that requires op-version 4. Are all the servers in your pool part of that volume?
14:18 ndevos maybe this could be an issue? libglusterfs/src/globals.h:#define GD_OP_VERSION_MAX  3
14:18 ccha2 no I couldn't set it in the 1st place
14:18 ccha2 glusterd did not allow the set and errored that the 2nd server did not have a good op-version
14:19 ccha2 so I shut down the 2nd server
14:19 wushudoin joined #gluster
14:19 ccha2 and I could set the option, and the op-version on this server went to 4
14:19 ccha2 everything is ok on the server
14:19 JoeJulian monotek: Have you tried running an fsck on the brick that's producing those errors?
14:19 ccha2 but if I stop and start the server again
14:20 ccha2 glusterd can't start
14:20 JoeJulian Ah, got it.
14:20 mortuar joined #gluster
14:20 JoeJulian And they both have the same gluster version?
14:20 ccha2 yes 3.5.1 beta
14:21 ccha2 ccha2 │ glusterd did not allow the set and errored that the 2nd server did not have a good op-version <-- when I used the set command on the 2nd server I got an error message about wrong op-version for the 1st server
14:21 ccha2 so I think 3.5.1 beta1 has max op-version at 3
14:21 ndevos maybe the op-version should be able to handle updates to stable releases too... version 4 seems to be for 3.6+
14:22 ndevos the backport for the server.manage-gids includes a new volume option, that needed an op-version bump too
14:23 chirino joined #gluster
14:23 gmcwhist_ joined #gluster
14:24 JoeJulian Ah. That makes sense.
14:24 JoeJulian now I'm not even sure I like that backported....
14:24 ccha2 any chance to have max op version 4 on 3.5.1 beta2 ?
14:24 ndevos I need to think about it a little more, but a 1-digit op-version is probably not sufficient, we should move that to a 3-digit (or more) number, I guess
14:25 ndevos well, bumping the op-version to 4 will give wrong expectations to other clients/servers, 3.5.x will not support all the new features that are currently in the works for 3.6 (and has op-version=4)
14:26 JoeJulian Precisely what I was thinking.
14:26 lmickh joined #gluster
14:26 Slashman joined #gluster
14:26 ccha2 so manage-gids would not be in 3.5.1
14:27 ndevos so, we could move the master branch to op-version=360 and the op-version in 3.5.1 to 351, that should not break too much
14:27 ndevos (only current 3.6 packages that are used for testing will have a problem speaking to 3.5.1)
14:28 jobewan joined #gluster
14:28 plarsen joined #gluster
14:29 ndevos we can pull manage-gids from 3.5.1, but a next bugfix might have a similar issue, so I'm more in favor to increase the op-version and create 'holes' in the numbering for stable versions
14:29 JoeJulian I thought I remembered there being a purpose to keeping the op-version independent from the release version.
14:31 jdarcy joined #gluster
14:31 lmickh joined #gluster
14:32 ndevos sure, it's not dependent on the release version, but using a sequential numbering scheme enforces limitations, and in this case, the limitations prevent a usability fix from inclusion
14:32 JoeJulian Alrighty. I'm off. Need to get some coffee somewhere (anyone know of good espresso in Phoenix for a Seattle coffee snob?) and get some breakfast and head over for orientation.
14:32 ndevos any numbering scheme that provides some gaps that can be used in stable branches would be nice
14:33 ndevos JoeJulian: sorry, never been there...
14:33 * ndevos gets a coffee too, but from his own kitchen
14:33 JoeJulian I don't like my coffee... ;)
14:33 JoeJulian I just hope I don't melt down here this week.
14:34 ndevos hehe, ttyl!
14:36 vimal joined #gluster
14:45 recidive joined #gluster
14:49 rjoseph joined #gluster
14:52 sjm joined #gluster
14:52 jag3773 joined #gluster
15:07 ndevos ccha2, JoeJulian: http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/6699 contains my idea on how this might get solved, lets see what the glusterd guys say
15:07 glusterbot Title: Gmane Loom (at thread.gmane.org)
15:11 in joined #gluster
15:19 in joined #gluster
15:22 sjm joined #gluster
15:26 plarsen joined #gluster
15:36 monotek my system is ubuntu 12.04 with glusterfs 3.4.3 from semiosis ppa. fs is ext4. all volumes i use are distributed replicated.
15:36 monotek after restarting my whole gluster because of a power failure i have some strange behaviour. everything seems to work for the clients but i have errors in the logs regarding xattr and all new files need self heal, which seems not to work.
15:36 monotek these are the complete logs of client & server while creating 1 new file named "testfile": http://paste.ubuntu.com/7572887/
15:36 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
15:37 aravindavk joined #gluster
15:45 Matthaeus joined #gluster
15:46 hagarth joined #gluster
15:48 rotbeard joined #gluster
15:51 daMaestro joined #gluster
15:53 sjoeboo joined #gluster
15:58 sputnik13 joined #gluster
16:02 glusterbot New news from newglusterbugs: [Bug 1073111] %post install warning for glusterfs-server that it can't find /etc/init.d/glusterfsd (on EL6) <https://bugzilla.redhat.com/show_bug.cgi?id=1073111>
16:04 haomaiwang joined #gluster
16:05 mdavidson joined #gluster
16:11 sjm1 joined #gluster
16:17 hagarth joined #gluster
16:17 jruggiero joined #gluster
16:24 Tume|Sai If anyone could give me some pointers on my setup, that would be great. I have a 2-node cluster, each with an 18TB disk cached with enhanceio (12x SSD in RAID 10); connectivity is done over SDR InfiniBand. For some reason, even with IPoIB I get io errors on writes in my kvm guests. With RDMA the whole thing gives even more IO errors. Running Ubuntu 14.04 and Gluster 3.5
16:25 mdavidson I have been playing around with renaming a gluster volume. What seems to work is: unmount gluster volumes from clients, stop and delete the old volume name, remove the trusted.glusterfs.volume-id and trusted.gfid attrs from the bricks, restart gluster and then reconstruct the volume with the new name. It looks ok at the moment, is there anything that might cause a problem?
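A sketch of the rename procedure mdavidson describes, using hypothetical volume and brick names; the setfattr commands have to be run on every brick directory:

    umount /mnt/oldvol                                       # on each client
    gluster volume stop oldvol
    gluster volume delete oldvol
    setfattr -x trusted.glusterfs.volume-id /bricks/brick1   # clear the brick's identity xattrs
    setfattr -x trusted.gfid /bricks/brick1
    service glusterd restart                                 # service name depends on the distro
    gluster volume create newvol server1:/bricks/brick1
    gluster volume start newvol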
16:25 shubhendu joined #gluster
16:26 Mo__ joined #gluster
16:27 rwheeler joined #gluster
16:27 haomaiwa_ joined #gluster
16:35 ramteid joined #gluster
16:39 jbd1 joined #gluster
16:41 Matthaeus joined #gluster
16:41 jag3773 joined #gluster
16:42 ProT-O-TypE joined #gluster
16:43 haomaiwang joined #gluster
16:45 jruggiero left #gluster
16:46 hagarth joined #gluster
16:47 gmcwhis__ joined #gluster
16:47 theron joined #gluster
16:50 aravindavk joined #gluster
16:56 sjm joined #gluster
16:56 Ark joined #gluster
17:03 vpshastry joined #gluster
17:04 mkzero joined #gluster
17:04 sprachgenerator joined #gluster
17:04 jdarcy Wow.  Turns out that FORTIFY_SOURCE (defined by default in Fedora 20) without -O will produce a "valid" set of RPMs containing crashy executables.
17:08 spiekey joined #gluster
17:14 kumar joined #gluster
17:21 Matthaeus joined #gluster
17:26 monotek my system is ubuntu 12.04 with glusterfs 3.4.3 from semiosis ppa.
17:26 monotek after restarting my whole gluster because of a power failure i have some strange behaviour. everything seems to work for the clients but i have errors in the logs regarding xattr and all new files need self heal, which seems not to work.
17:26 monotek these are the complete logs of client & server while creating 1 new file named "testfile": http://paste.ubuntu.com/7572887/
17:26 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
17:34 Ark joined #gluster
18:00 theron joined #gluster
18:20 qdk_ joined #gluster
18:28 zerick joined #gluster
18:32 MacWinne_ joined #gluster
18:33 RicardoSSP joined #gluster
18:33 RicardoSSP joined #gluster
18:38 theron joined #gluster
18:39 edward1 joined #gluster
18:43 B21956 joined #gluster
18:44 zerick joined #gluster
18:48 monotek my system is ubuntu 12.04 with glusterfs 3.4.3 from semiosis ppa.
18:48 monotek after restarting my whole gluster because of a power failure i have some strange behaviour. everything seems to work for the clients but i have errors in the logs regarding xattr and all new files need self heal, which seems not to work.
18:48 monotek these are the complete logs of client & server while creating 1 new file named "testfile": http://paste.ubuntu.com/7572887/
18:48 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
19:08 Ark joined #gluster
19:10 ctria joined #gluster
19:18 jbd1 joined #gluster
19:20 Ark joined #gluster
19:22 Ark joined #gluster
19:25 sjm joined #gluster
19:30 Matthaeus joined #gluster
19:30 marmalodak joined #gluster
19:41 marmalod1 joined #gluster
20:02 semiosis Ok so how long until someone does a libgfapi binding for Swift?  What will it be called?  gluster-swift???
20:02 semiosis https://developer.apple.com/swift/
20:02 glusterbot Title: Swift - Apple Developer (at developer.apple.com)
20:03 glusterbot New news from newglusterbugs: [Bug 1041109] structure needs cleaning <https://bugzilla.redhat.com/show_bug.cgi?id=1041109>
20:03 andreask joined #gluster
20:09 _pol joined #gluster
20:10 mortuar joined #gluster
20:16 kmai007 joined #gluster
20:16 kmai007 can anybody help explain what this mount flag means for fuse  use-readdirp=no   ?
20:20 semiosis my guess would be it means to not use readdirp
20:25 sjm joined #gluster
20:26 AaronGr joined #gluster
20:31 kmai007 true, so i guess i'll read about readdirp and then think the polar opposite
20:31 jdarcy If it doesn't use readdirp, it has to use plain readdir and then do a separate stat for each file - less efficient.
20:32 kmai007 thanks jdarcy, i attended all your sessions at summit this year
20:32 jdarcy Yay!
20:33 kmai007 so i was trying to follow up on bugzilla
20:33 kmai007 regarding 'stale file handle'
20:33 kmai007 and it said to mount the fuse vol with use-readdirp=no
20:33 kmai007 so i'm just trying to build my understanding of when and when not to use that option
20:34 jdarcy TBH I'm not sure why that would make a difference.
20:34 kmai007 https://bugzilla.redhat.com/show_bug.cgi?id=1041109
20:34 glusterbot Bug 1041109: urgent, unspecified, ---, csaba, NEW , structure needs cleaning
20:35 kmai007 oh by the way i'm using gluster3.4.2 on rhel6.5
20:35 ryant joined #gluster
20:37 jdarcy OK, so it's a bug in FUSE's readdirp implementation.  That makes a bit more sense.
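The workaround kmai007 found in the bug report amounts to mounting the volume with readdirp disabled; a sketch with hypothetical host, volume and mount point:

    mount -t glusterfs -o use-readdirp=no server1:/myvol /mnt/myvol
    # fstab equivalent:
    # server1:/myvol  /mnt/myvol  glusterfs  defaults,use-readdirp=no  0 0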
20:51 Mo_ joined #gluster
20:53 theron joined #gluster
21:02 Matthaeus joined #gluster
21:05 sprachgenerator joined #gluster
21:16 Ark joined #gluster
21:28 k3rmat joined #gluster
21:30 ndk joined #gluster
21:39 markd_ joined #gluster
21:45 jcsp joined #gluster
21:57 firemanxbr joined #gluster
22:03 sandy_muncher joined #gluster
22:04 firemanxbr joined #gluster
22:25 MugginsM joined #gluster
22:32 MugginsM joined #gluster
22:33 MugginsM joined #gluster
22:33 lmickh joined #gluster
22:40 plarsen joined #gluster
22:54 avati joined #gluster
22:55 marmalod1 joined #gluster
23:00 a2 joined #gluster
23:38 RicardoSSP joined #gluster
23:38 RicardoSSP joined #gluster
