IRC log for #gluster, 2013-12-07

All times shown according to UTC.

Time Nick Message
00:17 _pol joined #gluster
00:22 johnbot11 joined #gluster
00:25 gmcwhistler joined #gluster
00:32 badone joined #gluster
00:33 mattappe_ joined #gluster
00:48 Ge3 joined #gluster
00:48 Ge3 hello! On Ubuntu 13.10 and Gluster 3.4.1 installed from the latest packages, I cant create any volumes
00:49 Ge3 gluster volume create test03 replica 2 transport tcp gluster1:/foo/b gluster2:/foo/b
00:49 Ge3 volume create: test03: failed
00:49 Ge3 this is the only log entry:
00:49 Ge3 [2013-12-07 00:47:39.928301] W [rpc-transport.c:175:rpc_transport_load] 0-rpc-transport: missing 'option transport-type'. defaulting to "socket"
00:49 Ge3 [2013-12-07 00:47:39.929122] I [socket.c:3480:socket_init] 0-glusterfs: SSL support is NOT enabled
00:49 Ge3 [2013-12-07 00:47:39.929199] I [socket.c:3495:socket_init] 0-glusterfs: using system polling thread
00:49 Ge3 [2013-12-07 00:47:39.979599] I [cli-cmd-volume.c:387:cli_cmd_volume_create_cbk] 0-cli: Replicate cluster type found. Checking brick order.
00:49 Ge3 [2013-12-07 00:47:39.979969] I [cli-cmd-volume.c:304:cli_cmd_check_brick_order] 0-cli: Brick order okay
00:50 Ge3 volume create: test03: failed
00:50 Ge3 [2013-12-07 00:47:39.988896] I [cli-rpc-ops.c:805:gf_cli_create_volume_cbk] 0-cli: Received resp to create volume
00:50 Ge3 [2013-12-07 00:47:39.989498] I [input.c:36:cli_batch] 0-: Exiting with: -1
00:51 Ge3 there seems to be a similar report in ml http://www.gluster.org/pipermail/gluster-users/2013-December/038141.html
00:51 glusterbot Title: [Gluster-users] Catn create volume (at www.gluster.org)
00:51 Ge3 but it wasnt solved
00:51 hchiramm_ joined #gluster
00:54 Ge3 ill pay 50 usd for help
00:57 Ge3 no?
00:57 skered- Now we're talking
00:58 Ge3 so help me:)
01:00 Ge3 im running gluster under kvm
01:04 Cenbe joined #gluster
01:05 Ge3 ok, 100 USD ?
01:05 Ge3 for helping me
01:08 Ge3 no?
01:12 Ge3 200 usd?
01:18 skered- What does 'gluster peer status' return?
01:47 _pol joined #gluster
01:58 mattapp__ joined #gluster
02:04 mattap___ joined #gluster
02:45 MrNaviPacho joined #gluster
02:54 psyl0n joined #gluster
04:12 semiosis @qa releases
04:12 glusterbot semiosis: The QA releases are available at http://bits.gluster.com/pub/gluster/glusterfs/ -- RPMs in the version folders and source archives for all versions under src/
04:12 semiosis @forget qa releases
04:12 glusterbot semiosis: The operation succeeded.
04:13 semiosis @learn qa releases as QA releases are now available here: http://goo.gl/c1f0UD -- the old QA release site is here: http://goo.gl/V288t3
04:13 glusterbot semiosis: The operation succeeded.
04:14 semiosis johnmark: debian wheezy packages of 3.5.0qa3 are now published
04:14 semiosis http://download.gluster.org/pub/gluster/glusterfs/qa-releases/3.5.0qa3/Debian/
04:14 glusterbot Title: Index of /pub/gluster/glusterfs/qa-releases/3.5.0qa3/Debian (at download.gluster.org)
04:15 * semiosis out
04:53 davinder joined #gluster
05:25 gdubreui joined #gluster
05:50 gdubreui joined #gluster
05:54 hagarth joined #gluster
05:56 hchiramm_ joined #gluster
06:25 Spiculum joined #gluster
06:41 DV__ joined #gluster
08:05 ngoswami joined #gluster
08:06 ricky-ti1 joined #gluster
08:52 hchiramm_ joined #gluster
08:57 geewiz joined #gluster
09:05 samppah about object storage.. glusterfs object storage is compatible with openstack swift which is compatible with amazon s3 api?
09:34 rotbeard joined #gluster
09:47 hagarth samppah: gluster's object storage uses the opnestack swift api
09:47 hagarth and there is a middleware in swift that provides compatibility with amazon s3 api
09:49 samppah hagarth: ok, thanks. that sounds good.
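
For context, a minimal sketch of what hagarth describes looks like from a client. It assumes a gluster-swift (UFO) deployment with the proxy on localhost:8080 and the default tempauth test account; the endpoint, account and key are illustrative, not taken from this log:

    # native OpenStack Swift API exposed by gluster's object storage
    swift -A http://localhost:8080/auth/v1.0 -U test:tester -K testing upload mycontainer report.pdf
    swift -A http://localhost:8080/auth/v1.0 -U test:tester -K testing list mycontainer
    # with the swift3 middleware enabled in the proxy pipeline, the same store
    # also answers S3-style requests from any S3 client pointed at that endpoint
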
09:49 impure_love joined #gluster
09:49 impure_love hello!
09:49 glusterbot impure_love: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
09:49 impure_love Status: Brick is Not connected
09:50 impure_love anyone knows how do I trigger a brick reconnection without stopping and starting the volume?
09:50 impure_love the peer is connected
09:51 samppah impure_love: what version is this?
09:54 impure_love samppah: 3.4.1
09:55 samppah clients should reconnect to bricks automatically..
09:56 samppah service glusterd restart should start needed glusterfs processes if they are not running for any reason
09:56 samppah impure_love: what kind of setup you have, can you send output of gluster volume info and gluster volume status to pastie.org?
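
For anyone hitting the same "Brick is Not connected" state, a rough sketch of the usual recovery on 3.4.x, along the lines samppah suggests (VOLNAME is a placeholder):

    gluster volume status VOLNAME        # identify which brick process is offline
    gluster volume start VOLNAME force   # respawns missing brick processes, leaves running ones alone
    # alternatively, restarting the management daemon makes it re-launch expected bricks:
    service glusterd restart
    gluster volume status VOLNAME        # confirm the brick now reports Online: Y
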
09:58 hchiramm_ joined #gluster
10:38 calum_ joined #gluster
10:59 Gr6 joined #gluster
11:00 Gr6 hello. I have a problem with Ubuntu 13.10 and Gluster 3.4.1 installed from the packages. Cluster volume create fails.
11:00 Gr6 gluster volume create test03 replica 2 transport tcp
11:00 Gr6 >>> GlusterFS03:/home/test03 GlusterFS02:/home/test03
11:00 Gr6 >>> volume create: test03: failed
11:01 Gr6 similar issue as reported in the mailing list
11:01 Gr6 nothing in the logs
11:01 samppah Gr6: nothing in glusterd.log?
11:02 samppah is glusterd running?
11:02 Gr6 samppah: yes
11:02 Gr6 ps -ef|grep glusterd
11:02 Gr6 root       952     1  0 02:42 ?        00:00:01 /usr/sbin/glusterd -p /var/run/glusterd.pid
11:02 Gr6 gluster volume create test03 replica 2 transport tcp gluster1:/foo/b gluster2:/foo/b
11:02 Gr6 volume create: test03: failed
11:03 Gr6 i've used gluster on ubuntu 13.04 and same version without issues
11:03 samppah can you send /var/log/glusterfs/etc-glusterfs-glusterd.vol.log to pastie.org
11:03 samppah oh, ok
11:04 Gr6 ok, i can
11:04 Gr6 http://pastie.org/8535279
11:04 glusterbot Title: #8535279 - Pastie (at pastie.org)
11:05 Gr6 http://pastie.org/8535281
11:05 glusterbot Title: #8535281 - Pastie (at pastie.org)
11:05 Gr6 thats scli.log
11:05 Gr6 cli.log
11:05 Gr6 cli.log has little more
11:06 Gr6 [2013-12-07 11:04:10.681396] I [cli-rpc-ops.c:805:gf_cli_create_volume_cbk] 0-cli: Received resp to create volume
11:06 Gr6 [2013-12-07 11:04:10.681729] I [input.c:36:cli_batch] 0-: Exiting with: -1
11:06 Gr6 but not more than that
11:07 samppah are you using iptables or selinux?
11:07 Gr6 no
11:08 Gr6 vanilla ubuntu 13.10 server
11:09 samppah Gr6: what does gluster peer status say?
11:09 Gr6 gluster peer status
11:09 Gr6 Number of Peers: 1
11:09 Gr6 Hostname: gluster2
11:09 Gr6 Uuid: b0dc8d86-8297-428b-b9b9-9c5e9080f0f3
11:09 Gr6 State: Peer in Cluster (Connected)
11:10 samppah you are using GlusterFS03 as hostname in create command but peer status says it's peered with gluster2?
11:11 Gr6 that glusterfs03 was quote from the mailing list
11:11 Gr6 i use gluster volume create test03 replica 2 transport tcp gluster1:/foo/b gluster2:/foo/b
11:11 Gr6 if i try to run the same command second time i get
11:11 Gr6 volume create: test03: failed: /foo/b or a prefix of it is already part of a volume
11:11 glusterbot Gr6: To clear that error, follow the instructions at http://joejulian.name/blog/glusterfs-path-or-a-prefix-of-it-is-already-part-of-a-volume/ or see this bug https://bugzilla.redhat.com/show_bug.cgi?id=877522
11:12 Gr6 which i can clear with: setfattr -x trusted.glusterfs.volume-id /foo/b
11:12 Gr6 then i can try creating again
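
The full cleanup described in the blog post glusterbot links above is roughly the following, run on each brick directory before retrying the create (brick path /foo/b as used here):

    setfattr -x trusted.glusterfs.volume-id /foo/b
    setfattr -x trusted.gfid /foo/b
    rm -rf /foo/b/.glusterfs
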
11:12 samppah can you send output of gluster vol info and gluster vol status to pastie.org?
11:12 Gr6 gluster volume create test03 replica 2 transport tcp gluster1:/foo/b gluster2:/foo/b
11:12 Gr6 volume create: test03: failed
11:12 Gr6 gluster vol info
11:12 Gr6 No volumes present
11:12 samppah okay
11:12 Gr6 gluster vol status
11:12 Gr6 No volumes present
11:13 Gr6 should there be the management volume ?
11:13 samppah no.. not afaik
11:13 Gr6 /etc/glusterfs/glusterd.vol
11:13 Gr6 defines volume management
11:13 samppah it doesn't show in gluster vol status
11:13 Gr6 ok..
11:14 Gr6 well, are the ubuntu packages broken?
11:14 samppah what packages you are using?
11:14 samppah @ppa
11:14 glusterbot samppah: The official glusterfs packages for Ubuntu are available here: 3.3 stable: http://goo.gl/7ZTNY -- 3.4 stable: http://goo.gl/u33hy -- 3.5 QA: http://goo.gl/Odj95k
11:14 ndevos Gr6: just checking, but are those ,,(closed servers)?
11:14 glusterbot Gr6: I do not know about 'closed servers', but I do know about these similar topics: 'cloned servers'
11:14 ndevos @cloned servers
11:14 glusterbot ndevos: Check that your peers have different UUIDs ('gluster peer status' on both). The uuid is saved in /var/lib/glusterfs/glusterd.info - that file should not exist before starting glusterd the first time. It's a common issue when servers are cloned. You can delete the /var/lib/glusterfs/peers/<uuid> file and /var/lib/glusterfs/glusterd.info, restart glusterd and peer-probe again.
11:15 Gr6 ndevos, yes, same rack, run under kvm
11:15 Gr6 ii  glusterfs-client                    3.4.1-ubuntu1~saucy2             amd64        clustered file-system (client package)
11:15 Gr6 ii  glusterfs-common                    3.4.1-ubuntu1~saucy2             amd64        GlusterFS common libraries and translator modules
11:15 Gr6 ii  glusterfs-server                    3.4.1-ubuntu1~saucy2             amd64        clustered file-system (server package)
11:15 Gr6 ndevos, i created kvm vms in different machines manually, not cloning
11:16 ndevos Gr6: ah, okay - then I dont have any ideas
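
For reference, the cloned-servers check boils down to comparing UUIDs across the peers. On most 3.4 installs the state lives under /var/lib/glusterd (the factoid above says /var/lib/glusterfs, so use whichever path exists on your system):

    # run on every server; the UUIDs must all differ
    cat /var/lib/glusterd/glusterd.info
    gluster peer status
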
11:16 samppah me neither.. probably try to strace glusterd processes and gluster vol create to see if it gives any better error messages
11:16 Gr6 these are the ubuntu packages from gluster.org / deb http://ppa.launchpad.net/semiosis/ubuntu-glusterfs-3.4/ubuntu saucy main
11:16 glusterbot Title: Index of /semiosis/ubuntu-glusterfs-3.4/ubuntu (at ppa.launchpad.net)
11:17 samppah Gr6: what filesystem is /foo/b using? extended attributes enabled?
11:17 Gr6 ext4
11:17 ndevos you could run 'glusterd --log-level=DEBUG', sometimes that shows a little more useful messages
11:18 Gr6 i've tried creating a separate volume as well and a directory under / and subdirectory /foo/b
11:18 Gr6 but it doesnt work
11:20 Gr6 [2013-12-07 11:19:42.062779] D [glusterd-utils.c:4889:glusterd_friend_find_by_hostname] 0-management: Unable to find friend: gluster1
11:20 Gr6 i found a typo in /etc/hosts
11:21 ndevos sounds like you found the issue :)
11:22 Gr6 thanks for the debug hint, i try to create the volume now
11:23 Gr6 volume create: datavolume: success: please start the volume to access data
11:23 Gr6 now it went through
11:23 samppah great :)
11:24 Gr6 stupid that you need to run glusterd in debug mode to find this
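
A sketch of the debug run that surfaced the /etc/hosts typo, using the hostnames from this session:

    service glusterd stop
    glusterd --log-level=DEBUG          # or run it in the foreground with: glusterd --debug
    gluster volume create test03 replica 2 transport tcp gluster1:/foo/b gluster2:/foo/b
    grep -i "unable to find friend" /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
    getent hosts gluster1 gluster2      # sanity-check name resolution on every peer
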
11:24 samppah (,,bug)
11:24 samppah gah.. i never remember how this bot works :)
11:24 samppah @bug
11:24 glusterbot samppah: (bug <bug_id> [<bug_ids>]) -- Reports the details of the bugs with the listed ids to this channel. Accepts bug aliases as well as numeric ids. Your list can be separated by spaces, commas, and the word "and" if you want.
11:25 samppah @bug report
11:25 glusterbot samppah: Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=771807 is not accessible.
11:25 samppah Gr6: oh well, please send a bug report :)
11:26 hchiramm_ joined #gluster
11:35 Gr6 what's the biggest environment you use gluster for ?
11:35 Gr6 does anyone here use it for kvm storage ?
11:35 Gr6 or vmware?
11:35 Gr6 or under zfs ?
11:37 samppah i'm using it with kvm and rhev
11:37 samppah currently in use for internal projects and testing.. hopefully we can start using it production with customer data soon
11:38 Gr6 you dont want to wait 3.5 ?
11:39 Gr6 you're using 3.4.1 ?
11:39 samppah 3.4.1.. not sure if we are going to upgrade to 3.5 anytime soon
11:40 Gr6 have you stress-tested it with more than 10 processes writing at the same time?
11:41 Gr6 as i read 3.2.7 is stable for multiprocess writing, where 3.4.x isnt
11:41 samppah we are happy with 3.4.1.. of course if 3.5 has some necessary bug fixes then we have to upgrade sooner or later :)
11:41 Gr6 did you stresstest it?
11:41 samppah well yes.. we have done quite a lot testing in that manner.. do you have more information about that issue?
11:43 Gr6 wait a sec
11:43 Gr6 its in the mailing list
11:46 Gr6 http://www.gluster.org/pipermail/gluster-users/2013-December/038167.html
11:46 glusterbot Title: [Gluster-users] GlusterFS was removed from Fedora EPEL (at www.gluster.org)
11:47 Gr6 http://www.gluster.org/pipermail/gluster-users/2013-December/038106.html
11:47 glusterbot Title: [Gluster-users] GlusterFS was removed from Fedora EPEL (at www.gluster.org)
11:47 Gr6 Actually, I have very bad experience with GlusterFS 3.3.x and 3.4.x under
11:47 Gr6 very high pressure (> 64 processes write in parallel in more than 10
11:47 Gr6 minutes, for example). GlusterFS 3.2.7 from EPEL is really stable and we
11:47 Gr6 use it for production.
11:48 hchiramm_ joined #gluster
11:48 samppah hmm interesting
11:49 Gr6 also one bug report regarding port conflict https://bugzilla.redhat.com/show_bug.cgi?id=987555
11:49 glusterbot Bug 987555: medium, urgent, 3.4.2, ndevos, MODIFIED , Glusterfs ports conflict with qemu live migration
11:49 samppah yeah, that's very unfortunate
11:49 samppah luckily there's a workaround in rhel/centos packages already
11:52 samppah it's possible to specify where used port range starts in /etc/glusterfs/glusterd.vol
11:52 samppah with: option base-port 50152
11:52 Gr6 yes
11:53 Gr6 but not with ubuntu pkgs
11:53 samppah :(
11:53 Gr6 3.4.2 isnt released yet
11:53 Gr6 I am considering releasing 3.4.2 in the second half of November. Please feel free to propose patches/bugs for inclusion in 3.4.2 here:
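
For reference, the workaround samppah mentions above is a one-line addition to the management volume definition in /etc/glusterfs/glusterd.vol, followed by a glusterd restart; a sketch against the stock 3.4 file (other shipped options left as-is):

    volume management
        type mgmt/glusterd
        option working-directory /var/lib/glusterd
        # ... existing options unchanged ...
        option base-port 50152
    end-volume
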
11:54 ricky-ticky joined #gluster
11:55 samppah semiosis: is it possible to include this in current ubuntu packages as well?
12:14 hchiramm_ joined #gluster
12:37 rotbeard joined #gluster
13:23 dusmant joined #gluster
14:14 hybrid5121 joined #gluster
14:35 ricky-ticky joined #gluster
14:57 bennyturns joined #gluster
15:16 _BryanHm_ joined #gluster
15:17 davinder joined #gluster
15:18 sgowda joined #gluster
15:42 d-fence joined #gluster
15:45 sgowda joined #gluster
15:49 tqrst is it normal for glustershd to stay up after I've brought glusterd and gluterfsd down through their init scripts?
15:51 tqrst doesn't seem to be mentioned in the init scripts at all
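
For what it's worth (this question went unanswered in channel): glustershd is the self-heal daemon, a glusterfs client process that glusterd spawns per server, so the glusterd/glusterfsd init scripts leave it running. A rough way to check and stop it by hand, assuming standard packaging:

    gluster volume status            # lists "Self-heal Daemon" entries while glusterd is up
    ps aux | grep glustershd         # shows the process and the volfile it was started with
    pkill -f glustershd              # stops it explicitly; killall glusterfs also takes it down
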
15:52 ricky-ticky joined #gluster
15:58 hagarth joined #gluster
16:08 bennyturns joined #gluster
16:08 DV__ joined #gluster
16:08 ccha3 joined #gluster
16:08 keytab joined #gluster
16:08 wgao_ joined #gluster
16:08 basic` joined #gluster
16:08 compbio joined #gluster
16:31 rotbeard joined #gluster
17:05 hchiramm_ joined #gluster
17:21 davinder joined #gluster
17:42 sprachgenerator joined #gluster
18:16 _pol joined #gluster
19:09 rotbeard joined #gluster
19:33 alexp789_ joined #gluster
19:57 alexp789_ Hello all!  I just wondered if anyone on here has had problems relating to the error "transport.address-family" being reported in the logs?  (re: http://www.gluster.org/pipermail/gluster-users/2013-December/038203.html)
19:57 glusterbot Title: [Gluster-users] replace-brick failing - transport.address-family not specified (at www.gluster.org)
20:09 johnbot11 joined #gluster
20:12 johnbot11 joined #gluster
20:15 gmcwhistler joined #gluster
20:17 MrNaviPacho joined #gluster
20:27 abyss^ alexp789_: I am not sure if it possible and it right way to use replace brick to move brick on the same server (change name of bricks).
20:28 alexp789_ Oh right, I didn't realise that was a limitation, so would it be correct to say, bricks always need to be moved to a remote host?
20:30 abyss^ alexp789_: In my opinion - yes. I had the same issue but during replace-brick (to the another server) my server crashed and gluster go insane;)
20:31 alexp789_ Uh ho, not what I wanted to hear ;)  Do you have a volume with the setting 'replica 2'?  If you had to replace a failing disk in this volume how would you do it?
20:31 abyss^ I can't abort or do anything with this, I added option transport.address-family to the config file of gluster ( I have 3.3.1) then turned off all my gluster server then turn on.
20:31 alexp789_ Oh did that setting fix it?
20:33 alexp789_ I've got http://pastebin.com/F4HH5EHt in my glusterd.vol file, but I've only restarted the service, not the whole host...
20:33 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
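
If it helps, the option abyss^ refers to normally goes into the same management volume definition in /etc/glusterfs/glusterd.vol and needs a glusterd restart to take effect; a sketch assuming IPv4:

    volume management
        type mgmt/glusterd
        option working-directory /var/lib/glusterd
        option transport.address-family inet
        # ... remaining shipped options unchanged ...
    end-volume
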
20:33 GabrieleV joined #gluster
20:34 abyss^ alexp789_: hmm no;) Gluster still behave very strange but I didn't get that error. To fix it I just delete any line about replace-brick and change some lines in config file of brick. Sorry, above didn't help I forgot;) (I spent on that about 8h and tried a lot of things ;))
20:36 abyss^ ps aux|grep gluster then found your brick and look which config file is using gluster for that brick, then go to /var/lib/gluster/$yournameofvolume/ and find this file
20:36 alexp789_ haha I know that feeling, I've even created scripts to create and delete volumes I've been fighting with this problem so long ;-)
20:38 abyss^ alexp789_: sorry if I write missunderstanding, my english is not as good as I would like ;)
20:39 abyss^ s/found/find/
20:39 glusterbot What abyss^ meant to say was: ps aux|grep gluster then find your brick and look which config file is using gluster for that brick, then go to /var/lib/gluster/$yournameofvolume/ and find this file
20:45 sarkis joined #gluster
21:12 mattapp__ joined #gluster
21:20 johnbot11 joined #gluster
21:42 skered- joined #gluster
21:50 mattappe_ joined #gluster
21:53 mattapp__ joined #gluster
21:54 alexp789_ Sorry, just got stuck on a phone call, thanks a lot abyss^ and glusterbot... So is it standard to just edit those config files?
21:55 alexp789_ If so after editing, whats the 'correct' way to load the settings, just stopping and starting the volume?
22:01 mattapp__ joined #gluster
22:06 mattapp__ joined #gluster
22:12 abyss^ alexp789_: is not a standard (since version 3.2) but it worked in my case. Maybe you can just abort replace-brick? I load setting by restarting gluster daemon. I compared the file from other gluster (from the same peer) for that brick and edited (remove section concerning replace-brick and modify some lines to make it work proper). Then I had to turn off all gluster servers then turned them again. But I had different case (server crashed during replace brick).
22:12 _pol joined #gluster
22:15 alexp789_ Cool, think I might need to try power cycling the environment, its just a pain as there are also OpenStack servers, so its slightly more complicated...
22:15 alexp789_ Thanks for all your help though, very much appreciated!
22:17 tempnanner joined #gluster
22:18 badone joined #gluster
22:18 tempnanner this might be a dumb question, but how can I restrict who/what can peer with servers.  I know auth.allow for volume access, but what stops anyone from peering with a server
22:20 samppah tempnanner: only a server that's in gluster can peer new servers
22:20 tempnanner I dont understand, when I just setup my first two nodes that were blank machines I was just able to peer them in
22:22 tempnanner * peer them with each other
22:22 tempnanner doh
22:22 tempnanner I think I understand
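
For reference, a sketch of the two layers usually combined here: auth.allow controls which clients may mount a volume, while the management port that peer probes arrive on (24007/tcp) is typically fenced with a host firewall, since 3.4 has no built-in authentication for probes; addresses below are placeholders:

    gluster volume set VOLNAME auth.allow 192.168.10.*
    # restrict who can reach glusterd (and therefore peer probe) at the firewall:
    iptables -A INPUT -p tcp --dport 24007 -s 192.168.10.0/24 -j ACCEPT
    iptables -A INPUT -p tcp --dport 24007 -j DROP
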
22:34 rotbeard joined #gluster
22:56 sarkis joined #gluster
23:18 mattappe_ joined #gluster
23:30 johnbot11 joined #gluster
23:31 mattapp__ joined #gluster
