
IRC log for #gluster, 2015-03-08


All times shown according to UTC.

Time Nick Message
00:13 xiu d/b 22
00:26 topshare joined #gluster
02:25 bala joined #gluster
03:57 kudude joined #gluster
03:58 kudude hello everyone, im going thru the following tutorial http://www.gluster.org/community/documentation/index.php/Getting_started_rrqsg and i keep getting the following error 'volume create: testvol: failed: Staging failed on glusterfs2. Error: parent directory /export/sdb1 is already part of a volume'
03:58 kudude any ideas?
03:59 kudude i'm running 2 centos 6.6 vms
04:12 kudude anyone?
04:18 Folken_ kudude: you already have gluster data in /export/sdb1
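
A quick way to check for that leftover gluster data, assuming the brick directory is /export/sdb1/brick as in the tutorial; a sketch, run as root on each node:

    # dump every extended attribute in hex; any trusted.gfid or trusted.glusterfs.*
    # entry means the directory has already been part of a volume
    getfattr -d -m . -e hex /export/sdb1
    getfattr -d -m . -e hex /export/sdb1/brick
    # a previous brick also leaves an internal .glusterfs metadata directory behind
    ls -a /export/sdb1/brick/.glusterfs
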
04:33 gem joined #gluster
05:22 haomaiwang joined #gluster
05:23 haomaiw__ joined #gluster
06:04 bala joined #gluster
06:24 maveric_amitc_ joined #gluster
06:34 JoeJulian joined #gluster
06:44 Apeksha joined #gluster
07:59 LebedevRI joined #gluster
08:06 Debloper joined #gluster
08:23 kovshenin joined #gluster
08:24 Pupeno joined #gluster
09:00 chris| left #gluster
09:12 kovshenin joined #gluster
09:36 topshare joined #gluster
09:54 bala joined #gluster
10:01 topshare joined #gluster
10:05 julim joined #gluster
10:14 badone joined #gluster
10:29 kovshenin joined #gluster
10:44 DV joined #gluster
10:51 Prilly joined #gluster
10:53 Prilly I did some reading on Gluster FS, i have to say it sounds too good to be true. how stable is Gluster?
10:56 misc quite stable, it is used in production around the world
10:58 Folken_ Prilly: what are you looking at using it for?
11:04 DV joined #gluster
11:09 Prilly Folken_: I am wanting to use it to replace the storage backend for my security cameras
11:10 Prilly has anyone tried to use gluster with iSCSI as a direct backend for XEN?
11:10 Prilly Gluster has NFS, anyone tried mounting it as storage for xenserver?
11:11 misc not using xen, sorry
11:12 Prilly how is the random read/write for NFS?
11:15 misc mhh, working I guess ? read/write on the same file ?
11:20 Dave2 joined #gluster
11:26 Prilly misc: file locking is ok i guess, i just read that there are performance differences between the native client and nfs for large files vs small/random access; by nature NFS would be better than the native client with iscsi
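
For reference, the two access paths being compared can both be pointed at the same volume; a minimal sketch with a hypothetical volume name camvol and server gfs1:

    # FUSE native client
    mount -t glusterfs gfs1:/camvol /mnt/camvol
    # Gluster's built-in NFS server only speaks NFSv3, so pin the version
    mount -t nfs -o vers=3,mountproto=tcp gfs1:/camvol /mnt/camvol-nfs
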
11:26 vimal joined #gluster
11:27 DV joined #gluster
11:27 Prilly so how are you guys setting up bricks? I am thinking you are using raid (5 - 6) to form a volume and then present it as a brick?
11:34 Folken_ Prilly: I have 3 bricks each running mdadm raid5, I then run glusterfs over the top using a disperse volume
11:37 Prilly Folken_: yes, that is the same setup I am thinking of using, with dist replicate 2 and 4 nodes to start. how is this configuration performing?
11:38 Prilly are you seeing better performance in any way than the single raid can provide?
11:48 DV joined #gluster
11:53 topshare joined #gluster
12:13 Folken_ Prilly: sadly my network is only gigabit, but it's working well thus far
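
Roughly what the two layouts discussed above look like on the CLI; a sketch with hypothetical hosts srv1..srv4 and brick path /data/brick, not taken verbatim from either setup:

    # Folken_'s style: a dispersed (erasure-coded) volume over 3 bricks, tolerating 1 brick failure
    gluster volume create dispvol disperse 3 redundancy 1 \
        srv1:/data/brick srv2:/data/brick srv3:/data/brick
    # Prilly's plan: distributed-replicated, replica 2 across 4 nodes (a 2x2 layout)
    gluster volume create repvol replica 2 transport tcp \
        srv1:/data/brick srv2:/data/brick srv3:/data/brick srv4:/data/brick
    gluster volume start repvol
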
12:16 afics joined #gluster
12:18 ulizum joined #gluster
12:18 ulizum hi
12:18 glusterbot ulizum: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
12:20 ulizum i am trying to upgrade from 3.5 to 3.6 on Debian wheezy and somehow have issues with the generate-gfid-file.sh script, i am using the following command:
12:21 ulizum sudo bash generate-gfid-file.sh localhost:data $PWD/get-gfid.sh /tmp/master_gfid_file.txt
12:21 ulizum and get this as output:
12:21 ulizum generate-gfid-file.sh: line 27: fatal: command not found
12:21 ulizum .: glusterfs.gfid.string: Operation not supported
12:21 ulizum /usr/share/glusterfs/scripts
12:21 ulizum umount: /tmp/tmp.VIMvbNr7ad: not mounted
12:21 ulizum generate-gfid-file.sh: line 35: fatal: command not found
12:21 ulizum any ideas?
12:24 ulizum btw: this is for geo replication and I am following this documentation: http://www.gluster.org/community/documentation/index.php/Upgrade_to_3.6#Upgrade_steps_for_geo_replication:
13:14 kovshenin joined #gluster
13:18 overclk joined #gluster
13:52 ulizum forget it, I found out that my issue was related to this bug: https://bugzilla.redhat.com/show_bug.cgi?id=1191176
13:52 glusterbot Bug 1191176: urgent, unspecified, ---, bugs, NEW , Since 3.6.2: failed to get the 'volume file' from server
14:26 pelox joined #gluster
14:31 RC123 joined #gluster
14:32 RC123 hi, I am getting the below error when I run any "gluster set" cmd
14:32 RC123 volume set: failed: One or more connected clients cannot support the feature being set. These clients need to be upgraded or disconnected before running this command again
14:32 Folken_ are all your bricks running the same version of gluster
14:32 RC123 version ->  glusterfs 3.5.0
14:34 RC123 gluster server - >  3.5.0
14:34 RC123 client -> 3.4.2
14:35 RC123 any way to fix this without upgrading the client/server?
14:37 elico joined #gluster
14:38 Folken_ not that I'm aware
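
The error comes from a connected client (here the 3.4.2 mount) that is too old for the option being set, so the practical fix is to upgrade or unmount that client. To find which clients are attached, the servers can list them; a sketch assuming the volume is named myvol:

    # lists every client currently connected to each brick of the volume
    gluster volume status myvol clients
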
14:38 bala joined #gluster
14:48 rotbeard joined #gluster
14:54 plarsen joined #gluster
15:13 Bhaskarakiran joined #gluster
15:15 gem joined #gluster
15:21 T0aD joined #gluster
15:41 nangthang joined #gluster
15:51 ghenry joined #gluster
15:51 ghenry joined #gluster
16:34 side_control joined #gluster
16:43 kudude joined #gluster
16:45 elico left #gluster
16:52 kudude hello everyone, i'm going through this tutorial http://www.gluster.org/community/documentation/index.php/Getting_started_rrqsg but i keep getting this error when i try to create the volume: 'volume create: testvol: failed: Staging failed on glusterfs2. Error: parent directory /export/sdb1 is already part of a volume'
16:53 kudude i'm using centos 6.6 and 2 VMs running on ESXi 5.5
16:53 kudude this is a brand new build, starting from scratch.  no gluster info on these servers,
16:54 kudude anyone have any ideas?  google didn't really help
16:55 hagarth kudude: remove all extended attributes pertaining to glusterfs in /export/sdb1 on all nodes. that could help here.
16:55 kudude sorry i'm a newb to gluster, how do i do that?
16:55 hagarth list all extended attributes using attr -l /export/sdb1
16:56 hagarth setfattr -x <glusterfs-attributes> /export/sdb1/
16:56 kudude [root@glusterfs1 ~]# attr -l /export/sdb1 [root@glusterfs1 ~]#
16:56 kudude found nothing
16:56 hagarth attr -l /export/sdb1/brick1 ?
16:57 hagarth or your appropriate brick directory
16:57 kudude Attribute "glusterfs.volume-id" has a 16 byte value for /export/sdb1/brick/
16:57 hagarth do remove that attribute using setfattr -x
16:58 hagarth ensure that you do it on both VMs before issuing volume create
16:58 kudude 'setfattr -x /export/sdb1/brick/' right?
16:59 hagarth setfattr -x "trusted.glusterfs.volume-id" /export/sdb1/brick
16:59 theron joined #gluster
17:00 kudude nope, same error, i removed the attribute on node1 and node2 didn't have it
17:01 kudude reran
17:01 hagarth can you check the logs of glusterd from both nodes?
17:01 kudude ' gluster volume create testvol rep 2 transport tcp glusterfs1.dbz.home:/export/sdb1/brick glusterfs2.dbz.home:/export/sdb1/brick
17:01 kudude which log?
17:02 kudude cli.log?
17:02 kudude glusterfs1 and 2 are my nodes
17:02 kudude dbz.home is my internal domain
17:03 hagarth kudude: /var/log/glusterfs/etc-glusterfs...
17:04 kudude [2015-03-08 17:00:17.166648] E [glusterd-syncop.c:105:gd_collate_errors] 0-: Staging failed on glusterfs2. Error: parent directory /export/sdb1 is already part of a volume
17:05 kudude and this was on my 2nd node
17:05 kudude [2015-03-08 17:00:17.693486] E [glusterd-utils.c:8112:glusterd_is_path_in_use] 0-management: parent directory /export/sdb1 is already part of a volume [2015-03-08 17:00:17.693611] E [glusterd-op-sm.c:4539:glusterd_op_ac_stage_op] 0-management: Stage failed on operation 'Volume Create', Status : -1
17:06 hagarth kudude: doesn't attr -l /export/sdb1 list anything on your second node?
17:06 kudude nothing
17:06 kudude [root@glusterfs2 glusterfs]# attr -l /export/sdb1/brick/
17:06 kudude shows nothing
17:07 hagarth kudude: just /export/sdb1 without the brick directory?
17:20 kudude oh
17:20 kudude ahh
17:20 kudude Attribute "gfid" has a 16 byte value for /export/sdb1
17:20 kudude Attribute "glusterfs.volume-id" has a 16 byte value for /export/sdb1'
17:21 kudude remove both?
17:31 hagarth yes, please remove both with the trusted prefix
17:32 hagarth and do that on both nodes before attempting a volume create again
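
Putting hagarth's steps together, the cleanup on both glusterfs1 and glusterfs2 before retrying would look roughly like this; the .glusterfs removal is an extra precaution for leftover brick metadata, only needed if that directory exists:

    # clear the stale volume markers on the parent and brick directories
    setfattr -x trusted.glusterfs.volume-id /export/sdb1
    setfattr -x trusted.gfid /export/sdb1
    setfattr -x trusted.glusterfs.volume-id /export/sdb1/brick 2>/dev/null
    setfattr -x trusted.gfid /export/sdb1/brick 2>/dev/null
    rm -rf /export/sdb1/brick/.glusterfs
    # then, from one node only:
    gluster volume create testvol replica 2 transport tcp \
        glusterfs1.dbz.home:/export/sdb1/brick glusterfs2.dbz.home:/export/sdb1/brick
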
18:08 the-me joined #gluster
18:11 ulizum stupid question but is it safe to run 3.6.2 in production yet?
18:18 lalatenduM joined #gluster
18:26 Philambdo joined #gluster
18:27 Debloper joined #gluster
18:29 lalatenduM__ joined #gluster
19:11 jcarter2_ joined #gluster
20:28 DV joined #gluster
21:57 theron joined #gluster
22:08 huleboer joined #gluster
22:14 badone joined #gluster
22:25 luis_silva joined #gluster
22:30 ulizum left #gluster
22:33 diegows joined #gluster
22:41 msmith_ joined #gluster
22:58 theron joined #gluster
23:04 Prilly joined #gluster
23:08 plarsen joined #gluster
23:14 sankarshan joined #gluster
23:15 ghenry joined #gluster
23:29 mbelaninja joined #gluster
