
IRC log for #gluster, 2014-03-26


All times shown according to UTC.

Time Nick Message
00:04 Slasheri joined #gluster
00:09 chirino_m joined #gluster
00:12 lijiejun joined #gluster
00:17 gdubreui joined #gluster
00:20 theron joined #gluster
00:32 sprachgenerator joined #gluster
00:34 harish joined #gluster
00:40 pjschmitt joined #gluster
00:46 mattappe_ joined #gluster
00:57 chirino joined #gluster
01:01 cjanbanan joined #gluster
01:02 robo joined #gluster
01:03 bala joined #gluster
01:19 vpshastry1 joined #gluster
01:27 robo joined #gluster
01:29 lijiejun joined #gluster
01:44 discretestates joined #gluster
01:44 tokik joined #gluster
01:44 harish joined #gluster
01:47 [o__o] left #gluster
01:50 [o__o] joined #gluster
01:54 Nicolas_22 joined #gluster
01:54 Nicolas_22 hi, someone restarted the 4 servers that had gluster fs on them and now the mounts are not working
01:55 Nicolas_22 there are 2 servers replicating each other
01:55 Nicolas_22 as servers, and then 2 clients
01:57 Nicolas_22 when I do gluster peer status
01:57 Nicolas_22 it says connected
01:58 Nicolas_22 any idea what to look into next?
02:00 Nicolas_22 can anyone help me with this?
02:03 Nicolas_22 hello? anybody out there?
02:04 Alex Nicolas_22: If someone's around, they'll generally pipe up and try to help out when they can. :)
02:07 Nicolas_22 :)
02:07 Nicolas_22 I have no clue what else to look for
02:08 nightwalk joined #gluster
02:10 glusterbot New news from newglusterbugs: [Bug 1031328] Gluster man pages are out of date. <https://bugzilla.redhat.com/show_bug.cgi?id=1031328>
02:11 mattappe_ joined #gluster
02:17 raghug joined #gluster
02:21 Nicolas_22 so when I cd into the shared directory I get "Transport endpoint is not connected"
02:22 thigdon Nicolas_22: typically means that the client can't contact the server
02:22 thigdon are you sure the server is running?
02:23 Nicolas_22 thigdon: yes it is running
02:23 thigdon anything interesting in the logs?
02:23 Nicolas_22 server logs?
02:24 thigdon or the brick logs?
02:24 thigdon or the client logs for that matter
02:24 mattappe_ joined #gluster
02:24 Nicolas_22 ok so when I'm on the server
02:24 harish joined #gluster
02:25 Nicolas_22 newwww1.mydomain.com:/rimagesvolume /r_images glusterfs defaults,_netdev,backupvolfile-server=newwww3.mydomain.com 0 0
02:25 Nicolas_22 is in /etc/fstab
02:25 Nicolas_22 which means /r_images needs to be mounted on the server itself too, so the server itself is also a client of itself
02:26 Nicolas_22 now that mount isn't working so I'm trying to manually mount it
02:27 Nicolas_22 mount -t glusterfs newwww1.mydomain.com:/rimagesvolume /r_images
02:27 Nicolas_22 by running this
02:27 thigdon does the server log have anything interesting?
02:27 Nicolas_22 yeah
02:27 Nicolas_22 bunch of stuff I'll pastebin
02:27 thigdon keep in mind, i'm just a user like you. i'm no expert.
02:28 Nicolas_22 http://pastebin.com/ZPnTfNZu
02:28 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
02:28 Nicolas_22 http://fpaste.org/88635/39580090/
02:28 glusterbot Title: #88635 Fedora Project Pastebin (at fpaste.org)
02:29 Nicolas_22 thigdon: that's cool I appreciate the help
02:29 thigdon pastebin doesn't seem to be resolving for me right now
02:30 Nicolas_22 failed to get the port number for remote subvolume
02:31 Nicolas_22 the brick logs are all empty
02:31 thigdon i believe that's the client log you showed me
02:31 thigdon although i'm not sure
02:32 thigdon perhaps the server is not running?
02:33 Nicolas_22 well see the same linux box is both client and server
02:33 Nicolas_22 of itself
02:34 Nicolas_22 is it the /var/log/glustershd.log that you wanna see?
02:34 tokik_ joined #gluster
02:35 thigdon i don't know what the default log name is, unfortunately
02:36 thigdon it could be /var/log/glusterd.log
02:36 thigdon it is not /var/log/glustershd.log
02:36 thigdon that is the self-heal daemon, which is something different
02:37 Nicolas_22 ok when I do service --status-all I see that
02:37 Nicolas_22 glusterd (pid  1161) is running..., glusterfsd is stopped
02:37 Nicolas_22 are they both supposed to be running?
02:37 thigdon glusterfsd is probably supposed to be running
02:39 wrale joined #gluster
02:41 Nicolas_22 so I tried sudo service glusterfsd start
02:41 Nicolas_22 but it's still stopped with no output
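
The symptoms above ("Transport endpoint is not connected", "failed to get the port number for remote subvolume", glusterfsd shown as stopped) usually come down to brick processes not running. A minimal diagnostic sketch, assuming the volume and mount point from the fstab line earlier; log file names vary with distribution and mount path:

    # is the management daemon up, and which bricks/ports does it report online?
    service glusterd status
    gluster volume status rimagesvolume

    # if bricks show as offline (N in the Online column), force-start them
    gluster volume start rimagesvolume force

    # client mount log is named after the mount point; brick logs live in bricks/
    less /var/log/glusterfs/r_images.log
    ls /var/log/glusterfs/bricks/
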
02:44 cjanbanan joined #gluster
03:11 bharata-rao joined #gluster
03:25 mattapperson joined #gluster
03:27 dusmant joined #gluster
03:29 raghug joined #gluster
03:30 shubhendu joined #gluster
03:33 RameshN joined #gluster
03:33 kanagaraj joined #gluster
03:38 thigdon left #gluster
03:40 itisravi joined #gluster
03:47 haomaiwang joined #gluster
04:04 sahina joined #gluster
04:08 pk joined #gluster
04:18 sks joined #gluster
04:19 seapasulli joined #gluster
04:26 deepakcs joined #gluster
04:39 atinm joined #gluster
04:40 mohankumar joined #gluster
04:41 kdhananjay joined #gluster
04:42 shylesh joined #gluster
04:45 lalatenduM joined #gluster
04:51 dusmant joined #gluster
04:56 aravindavk joined #gluster
05:05 cjanbanan joined #gluster
05:06 hagarth joined #gluster
05:07 ravindran1 joined #gluster
05:11 bala joined #gluster
05:12 ndarshan joined #gluster
05:12 prasanth_ joined #gluster
05:16 vpshastry1 joined #gluster
05:18 ppai joined #gluster
05:21 raghug joined #gluster
05:24 AaronGr joined #gluster
05:26 kdhananjay joined #gluster
05:27 nightwalk joined #gluster
05:32 chirino_m joined #gluster
05:34 meghanam joined #gluster
05:34 meghanam_ joined #gluster
05:36 coredump_ joined #gluster
05:43 rjoseph joined #gluster
05:54 benjamin_____ joined #gluster
06:00 raghug joined #gluster
06:05 vimal joined #gluster
06:10 glusterbot New news from newglusterbugs: [Bug 1074947] add option to bulld rpm without server <https://bugzilla.redhat.com/show_bug.cgi?id=1074947>
06:12 spandit joined #gluster
06:15 raghu joined #gluster
06:19 sticky_afk joined #gluster
06:19 dusmant joined #gluster
06:19 stickyboy joined #gluster
06:24 Philambdo joined #gluster
06:24 ricky-ticky1 joined #gluster
06:26 rahulcs joined #gluster
06:32 nshaikh joined #gluster
06:37 psharma joined #gluster
06:50 rjoseph joined #gluster
06:57 vpshastry2 joined #gluster
07:03 kdhananjay joined #gluster
07:15 cjanbanan joined #gluster
07:19 ngoswami joined #gluster
07:25 rgustafs joined #gluster
07:26 jtux joined #gluster
07:33 ekuric joined #gluster
07:48 tokik joined #gluster
07:55 rjoseph joined #gluster
07:56 dusmant joined #gluster
08:02 eseyman joined #gluster
08:03 ctria joined #gluster
08:07 hagarth joined #gluster
08:09 andreask joined #gluster
08:17 slayer192 joined #gluster
08:20 nightwalk joined #gluster
08:21 edward1 joined #gluster
08:25 keytab joined #gluster
08:27 ngoswami joined #gluster
08:33 cjanbanan joined #gluster
08:36 fsimonce joined #gluster
08:40 tshefi joined #gluster
08:40 muhh joined #gluster
08:43 jtux joined #gluster
08:48 jtux joined #gluster
08:55 dusmant joined #gluster
08:57 slayer192 joined #gluster
09:08 liquidat joined #gluster
09:12 jbustos joined #gluster
09:35 bala joined #gluster
09:37 XATRIX joined #gluster
09:37 XATRIX Hi , can anyone help me with gluster ?
09:39 XATRIX I mean, i have runtime configuration, but no config files for volumes
09:39 XATRIX Actually only this one : http://ur1.ca/gx4qc
09:39 glusterbot Title: #88685 Fedora Project Pastebin (at ur1.ca)
09:39 XATRIX How can i tune io performance, caches and etc... ?
09:39 XATRIX I mean i need to create a config file for volumes as i understand ?
09:50 sahina joined #gluster
09:51 hagarth joined #gluster
09:51 kanagaraj joined #gluster
10:00 Slash joined #gluster
10:04 ngoswami joined #gluster
10:15 tryggvil joined #gluster
10:27 jmarley joined #gluster
10:28 kanagaraj joined #gluster
10:29 sahina joined #gluster
10:32 shyam joined #gluster
10:33 bala joined #gluster
10:33 lijiejun_ joined #gluster
10:35 hagarth joined #gluster
10:39 ndevos XATRIX: you can find some tunables with the command 'gluster volume set help'
10:40 ndevos XATRIX: changing these is done through 'gluster volume set $VOLUME $OPTION $NEW_VALUE' - dont edit config files by hand
10:40 dusmant joined #gluster
10:40 ndevos XATRIX: resetting options to their defaults is done with 'gluster volume reset $VOLUME' or 'gluster volume reset $VOLUME $OPTION'
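
Spelled out, with a hypothetical volume name (myvol) and an example option, the commands ndevos is referring to look like this:

    # list the available tunables and their defaults
    gluster volume set help

    # change an option
    gluster volume set myvol performance.cache-size 256MB

    # reset one option, or all options, back to the defaults
    gluster volume reset myvol performance.cache-size
    gluster volume reset myvol
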
10:45 theron joined #gluster
10:48 calum_ joined #gluster
10:49 ctria joined #gluster
10:49 giannello joined #gluster
10:53 sahina joined #gluster
10:54 kanagaraj joined #gluster
10:54 XATRIX ndevos: and how can i configure io-cache or how can i raise network performance a bit ?
10:54 XATRIX http://ur1.ca/gx5be
10:54 glusterbot Title: #88704 Fedora Project Pastebin (at ur1.ca)
10:55 nixpanic joined #gluster
10:56 nixpanic joined #gluster
10:58 kdhananjay joined #gluster
10:58 ndevos XATRIX: 'gluster volume set help' should list a io-cache option, you can use that
11:00 ndevos XATRIX: note that you should do a lot of testing on the workload that you want to improve the performance of - create a test-matrix, record all your settings and results
11:03 hagarth joined #gluster
11:03 tokik joined #gluster
11:05 XATRIX ndevos: yea, i will. just for the notice, i have a 2node cluster with shared storage within. 2xHDDs on one node are in MDRAID1
11:06 XATRIX the same setup is on the other node
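
For the io-cache question specifically, the relevant knobs are in the performance.* group; the values below are illustrative only (myvol is a placeholder) and, as ndevos says, each change should be benchmarked against the real workload:

    gluster volume set myvol performance.cache-size 512MB              # io-cache size
    gluster volume set myvol performance.io-thread-count 32            # server-side io-threads
    gluster volume set myvol performance.write-behind-window-size 1MB  # write-behind buffering
    gluster volume info myvol                                          # shows options changed from defaults
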
11:08 harish_ joined #gluster
11:08 chirino joined #gluster
11:11 diegows joined #gluster
11:11 glusterbot New news from newglusterbugs: [Bug 1080946] AFR-V2 : Self-heal-daemon not completely self-healing all the files <https://bugzilla.redhat.com/show_bug.cgi?id=1080946>
11:12 ravindran1 left #gluster
11:23 dusmant joined #gluster
11:23 kanagaraj joined #gluster
11:24 muhh left #gluster
11:27 aravindavk joined #gluster
11:28 saurabh joined #gluster
11:36 nightwalk joined #gluster
11:42 nshaikh joined #gluster
11:46 lijiejun_ joined #gluster
11:46 nishanth joined #gluster
11:48 B21956 joined #gluster
11:51 hagarth joined #gluster
11:53 aravindavk joined #gluster
11:55 ngoswami joined #gluster
11:56 monotek i just had a strange problem with some disconnected / overloaded nodes after adding icinga checks. is it possible that too much usage of "gluster peer status" / "gluster volume status" locks the servers?
11:59 itisravi joined #gluster
12:04 tryggvil joined #gluster
12:07 shyam joined #gluster
12:08 lijiejun joined #gluster
12:10 xavih_ joined #gluster
12:12 glusterbot New news from newglusterbugs: [Bug 1080970] SMB:samba and ctdb hook scripts are not present in corresponding location after installation of 3.0 rpm's <https://bugzilla.redhat.com/show_bug.cgi?id=1080970>
12:16 ppai joined #gluster
12:17 kdhananjay joined #gluster
12:23 kanagaraj joined #gluster
12:24 ajha joined #gluster
12:29 kkeithley1 joined #gluster
12:35 sahina joined #gluster
12:40 sks joined #gluster
12:40 rjoseph joined #gluster
12:46 Pavid7 joined #gluster
12:48 DerekT joined #gluster
12:49 sroy_ joined #gluster
12:50 DerekT left #gluster
12:50 japuzzo joined #gluster
12:50 social Is there a way to print out the order of xlators currently used on a volume?
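
social's question goes unanswered below; for reference, the translator stack a volume actually runs is written into the volfiles glusterd generates, so with default paths and a hypothetical volume name the order can be read from:

    # client-side (fuse) graph
    cat /var/lib/glusterd/vols/myvol/trusted-myvol-fuse.vol
    # per-brick server-side graph
    cat /var/lib/glusterd/vols/myvol/myvol.<server>.<brick-path>.vol
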
12:52 ppai joined #gluster
12:53 kanagaraj joined #gluster
12:53 mohankumar joined #gluster
12:59 benjamin_____ joined #gluster
13:00 robos joined #gluster
13:03 bet_ joined #gluster
13:05 kanagaraj joined #gluster
13:08 JPeezy joined #gluster
13:10 kanagaraj joined #gluster
13:12 glusterbot New news from newglusterbugs: [Bug 1080988] AFR V2 : kernel untar on fuse/nfs mount fails when a brick goes offline <https://bugzilla.redhat.com/show_bug.cgi?id=1080988>
13:19 primechuck joined #gluster
13:19 jobewan joined #gluster
13:23 ppai joined #gluster
13:28 robos joined #gluster
13:29 jtux joined #gluster
13:42 glusterbot New news from newglusterbugs: [Bug 1081013] glusterd needs xfsprogs and e2fsprogs packages <https://bugzilla.redhat.com/show_bug.cgi?id=1081013> || [Bug 1081016] glusterd needs xfsprogs and e2fsprogs packages <https://bugzilla.redhat.com/show_bug.cgi?id=1081016> || [Bug 1081018] glusterd needs xfsprogs and e2fsprogs packages <https://bugzilla.redhat.com/show_bug.cgi?id=1081018>
13:46 shubhendu joined #gluster
13:47 hagarth joined #gluster
13:47 nishanth joined #gluster
13:48 jmarley joined #gluster
13:48 jmarley joined #gluster
13:48 dusmant joined #gluster
13:49 tryggvil joined #gluster
13:49 pk left #gluster
13:52 kanagaraj joined #gluster
13:54 rpowell joined #gluster
13:55 chirino joined #gluster
14:03 xymox joined #gluster
14:04 kaptk2 joined #gluster
14:08 tryggvil joined #gluster
14:10 LoudNoises joined #gluster
14:11 xymox joined #gluster
14:13 primechu_ joined #gluster
14:13 fraggeln I have a small issue
14:13 fraggeln I do: gluster volume create cinder replica 2 node03:/export/cinder node02:/export/cinder
14:13 fraggeln volume create: cinder: failed
14:14 fraggeln but the folder /export/cinder is created, but nothing more happens
14:14 fraggeln the api-log doesnt tell me that much tbh
14:14 fraggeln any idea on how to investigate further?
14:15 coredump joined #gluster
14:17 bene joined #gluster
14:26 tdasilva joined #gluster
14:27 wushudoin joined #gluster
14:34 robos joined #gluster
14:38 monotek i just had a strange problem with some disconnected / overloaded nodes after adding icinga checks. is it possible that too much usage of "gluster peer status" / "gluster volume status" locks the servers?
14:38 wushudoin left #gluster
14:39 nightwalk joined #gluster
14:39 lijiejun joined #gluster
14:42 ctria joined #gluster
14:46 theron joined #gluster
14:48 theron joined #gluster
14:49 lijiejun joined #gluster
14:51 lalatenduM fraggeln, check the gluster logs in /var/log/glusterfs
14:52 lalatenduM monotek, did not get your question
14:52 fraggeln lalatenduM: it tells me noting of intrests
14:52 magicrobotmonkey joined #gluster
14:52 fraggeln 15:44:53 < fraggeln> [2014-03-26 14:43:33.589906] I [cli-rpc-ops.c:805:gf_cli_create_volume_cbk] 0-cli: Received resp to
14:52 fraggeln create volume
14:52 fraggeln 15:44:54 < fraggeln> [2014-03-26 14:43:33.590013] W [cli-rl.c:106:cli_rl_process_line] 0-glusterfs: failed to process line
14:53 fraggeln not much info on why there mate
14:53 lalatenduM fraggeln, which os, distribution u r using
14:53 warci joined #gluster
14:53 lalatenduM fraggeln, I've never seen something like this before
14:54 monotek lalatenduM - i just want to know, if "excessive" usage of "gluster volume status" locks the server / creates load?
14:54 zaitcev joined #gluster
14:54 diegows joined #gluster
14:55 lalatenduM monotek, not sure, however if something is already wrong then volume status might take longer than usual
14:55 fraggeln lalatenduM: centos 6.5
14:55 lalatenduM fraggeln, which version of gluster u r using?:
14:55 fraggeln and it creates the dirs on the node
14:55 warci jclift , lalatenduM thanks for the info yesterday... had to leave in a hurry so no time to say thx ;)
14:55 lalatenduM fraggeln, what is the status of selinux and iptables
14:56 jclift :)
14:56 monotek volume status was not working anymore on all nodes for hours.... had to restart all glusterd processes...
14:56 lalatenduM warci, no prob :)
14:56 fraggeln permissive and off
14:56 lalatenduM fraggeln, and "gluster peer info"
14:57 fraggeln all connected
14:57 fraggeln glusterfs-server-3.4.2-1.el6.x86_64
14:57 lalatenduM fraggeln, selinux and iptables are in same state in all nodes?
14:58 fraggeln yes
14:58 lalatenduM fraggeln, /export is separate partition?
14:58 lmickh joined #gluster
14:59 fraggeln lalatenduM: yea, separate xfs
14:59 bene2 joined #gluster
15:00 fraggeln Im on the train home, so my connection goes up and down :/
15:00 lalatenduM fraggeln, both node3 and node2 are dns resolvable from both hosts
15:00 fraggeln yea
15:00 jclift *** GLUSTER COMMUNITY MEETING TIME in #gluster-meeting ***
15:00 lalatenduM fraggeln, try creating volume using IPs,
15:01 fraggeln lalatenduM: the fun part is, I have node7 and node8 as well, and they are already running a volume (replicated)
15:01 lalatenduM fraggeln, using different dirname this time e.g. /export/cinder-b1
15:01 jdarcy joined #gluster
15:01 lalatenduM fraggeln, there is no reason why it does not work :(
15:02 fraggeln lalatenduM: I have tried that
15:02 fraggeln dirs will be created, but then nothing
15:02 fraggeln Volume Name: glance
15:02 fraggeln Type: Replicate
15:02 fraggeln Volume ID: 1c3a9c81-b00d-4dc0-8242-96f61d4de5bb
15:02 fraggeln Status: Started
15:02 fraggeln Number of Bricks: 1 x 2 = 2
15:02 fraggeln Transport-type: tcp
15:02 fraggeln Bricks:
15:02 fraggeln Brick1: node07:/export/glance
15:02 fraggeln Brick2: node08:/export/glance
15:02 tryggvil joined #gluster
15:02 fraggeln ahh fuck
15:02 fraggeln sorry
15:02 fraggeln wrong window
15:06 semiosis @later tell lpabon please ping me re: a jenkins account when you have a chance. thanks!
15:06 glusterbot semiosis: The operation succeeded.
15:07 daMaestro joined #gluster
15:07 warci if somebody watching now, here is our nfs issue: http://pastebin.com/9yQiidey
15:07 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
15:07 warci i'll post it to the mailinglist as well
15:08 chirino joined #gluster
15:11 tryggvil joined #gluster
15:15 warci so basically i get this error: GETATTR: NFS: 2(No such file or directory), POSIX: 14(Bad address)
15:15 seapasulli joined #gluster
15:15 warci i'm seeing some stuff on google that relates to this error, but nothing conclusive
15:16 warci anyone any ideas?
15:17 lalatenduM warci, as of now weekly community meeting is going on at #gluster-meeting, so responses might be slow
15:17 warci hehe no problem :)
15:17 warci i'll keep looking for a solution as well
15:19 benjamin_____ joined #gluster
15:23 ctria joined #gluster
15:25 JPeezy joined #gluster
15:30 lijiejun joined #gluster
15:39 dbruhn joined #gluster
15:40 rpowell left #gluster
15:42 sprachgenerator joined #gluster
15:43 Lethalman joined #gluster
15:43 Lethalman hi, would gluster be suitable just as nfs export? but without the nfs protocol, right gluster fuse
15:43 Lethalman I'm having troubles with nfs stale file handles, and I'd like to get rid of nfs for this reason
15:45 ndk joined #gluster
15:46 ccha semiosis: hello, why you didn't add brick log for logrorate ?
15:46 vpshastry1 joined #gluster
15:50 raghug joined #gluster
15:51 bazzles joined #gluster
15:54 lijiejun joined #gluster
15:54 theron_ joined #gluster
15:55 discretestates joined #gluster
16:00 vpshastry1 left #gluster
16:03 semiosis ccha: no reason.  i should add that
16:05 lalatenduM hchiramm_, ping
16:07 coredump_ joined #gluster
16:09 fraggeln lalatenduM: I have solved it
16:10 fraggeln lalatenduM: I think it's possible that it's related to the number of nodes in the cluster
16:10 fraggeln I did add 8 nodes, but I only tried to make the volume on 2 of the nodes.
16:10 fraggeln when I removed all excessive nodes it worked like a charm creating that node.
16:10 fraggeln s/node/volume
16:11 sks joined #gluster
16:11 bazzles joined #gluster
16:11 bene2 joined #gluster
16:20 Matthaeus joined #gluster
16:21 Mo__ joined #gluster
16:22 vpshastry1 joined #gluster
16:22 tryggvil joined #gluster
16:30 sputnik13 joined #gluster
16:36 zerick joined #gluster
16:43 cjanbanan joined #gluster
16:45 daMaestro joined #gluster
16:47 dusmant joined #gluster
16:48 rjoseph joined #gluster
16:52 fraggeln is it possible when I have a 8 node cluster that it will behave strange when trying to add a volume just to 2 nodes?
16:56 kanagaraj joined #gluster
16:57 semiosis anything is possible, but it would be unusual
16:59 fraggeln I have 6 nodes at the moment
16:59 fraggeln 2 of the nodes have volumeX, 2 of them have volumeY
17:00 fraggeln and Im trying to add 2 new bricks to volumeX with 2 new nodes
17:00 fraggeln but it fails
17:01 lijiejun joined #gluster
17:04 lalatenduM fraggeln, that should be possible, using 2 nodes out of 8,
17:04 semiosis give a pastie with 1. command you executed, 2. its output, and 3. the glusterd log file from the server where you executed the command
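
For reference, expanding an existing replica-2 volume by one more replica pair looks roughly like the following (host names and brick paths are made up); the glusterd log semiosis asks for is on the node where the command was run:

    gluster peer probe node05
    gluster peer probe node06

    # add one replica pair; the replica count (2) stays the same
    gluster volume add-brick volumeX node05:/export/volumeX node06:/export/volumeX

    # optionally spread existing data onto the new bricks
    gluster volume rebalance volumeX start

    # default glusterd log location on 3.4
    less /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
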
17:05 hagarth joined #gluster
17:07 rjoseph left #gluster
17:10 davinder joined #gluster
17:12 theron joined #gluster
17:14 davinder2 joined #gluster
17:14 chirino_m joined #gluster
17:16 cfeller joined #gluster
17:18 robos joined #gluster
17:30 robos joined #gluster
17:32 sputnik13 is anyone using gluster with openstack havana?
17:36 shyam joined #gluster
17:39 lalatenduM sputnik13, I have some mails in gluster-users abt it
17:43 B21956 joined #gluster
17:50 elico left #gluster
17:55 raghug joined #gluster
17:56 sroy__ joined #gluster
17:58 robos joined #gluster
17:59 magicrobotmonkey left #gluster
17:59 ctria joined #gluster
18:00 ron-slc joined #gluster
18:01 jmarley joined #gluster
18:01 jmarley joined #gluster
18:12 jdarcy joined #gluster
18:13 jdarcy left #gluster
18:13 jdarcy joined #gluster
18:14 chirino joined #gluster
18:20 lijiejun joined #gluster
18:26 nightwalk joined #gluster
18:31 awheeler_ joined #gluster
18:35 lpabon joined #gluster
18:36 lijiejun joined #gluster
18:43 glusterbot New news from newglusterbugs: [Bug 1058526] tar keeps reporting "file changed as we read it" on random files <https://bugzilla.redhat.com/show_bug.cgi?id=1058526>
18:50 theron joined #gluster
18:54 JoeJulian sputnik13: yes
18:57 larsks sputnik13: yes (as a cinder backend)
18:57 lijiejun joined #gluster
18:57 elico joined #gluster
19:00 bet_ joined #gluster
19:01 robos joined #gluster
19:03 sputnik13 so...  I figured out that nova wasn't trying to use libgfapi because I didn't have qemu_allowed_storage_drivers=gluster set
19:03 sputnik13 now that I have it set thusly, now the error I get is that the volume can't be found
19:03 sputnik13 grr
19:04 criticalhammer joined #gluster
19:10 Pavid7 joined #gluster
19:13 glusterbot New news from newglusterbugs: [Bug 1075611] [FEAT] log: enhance gluster log format with message ID and standardize errno reporting <https://bugzilla.redhat.com/show_bug.cgi?id=1075611>
19:15 robos joined #gluster
19:22 lijiejun joined #gluster
19:28 JoeJulian sputnik13: Do you have your server:/volume set in /etc/cinder/shares.conf?
19:30 vpshastry1 joined #gluster
19:30 sputnik13 JoeJulian: yup, I think I see what the problem might be actually
19:30 sputnik13 doh
19:30 sputnik13 my compute nodes were using glusterfs-client 3.2.x
19:31 sputnik13 which is kind of weird because I added the 3.4 ppa before I installed glusterfs-client
19:31 sputnik13 maybe I didn't do an aptitude update first so it went to the default ubuntu repo
19:32 sputnik13 nice, it's now working
19:32 sputnik13 :-D
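
Roughly the configuration being discussed here, as a sketch for Havana; the option names should be checked against your release and the blog linked below, and host/volume names are placeholders:

    # /etc/cinder/cinder.conf on the cinder-volume host
    volume_driver = cinder.volume.drivers.glusterfs.GlusterfsDriver
    glusterfs_shares_config = /etc/cinder/shares.conf

    # /etc/cinder/shares.conf -- one server:/volume per line
    gluster1.example.com:/cinder-vol

    # /etc/nova/nova.conf on every compute node, as sputnik13 set above
    qemu_allowed_storage_drivers = gluster
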
19:34 sputnik13 is it possible to use libgfapi with nova for instance image storage as well?
19:35 JoeJulian sputnik13: Not that I've found.
19:35 sputnik13 nuts
19:35 JoeJulian yeah, I'm not sure why either. I suppose the code just isn't there.
19:36 sputnik13 so I have to mount glusterfs to /var/lib/nova
19:36 sputnik13 yeah, I guess so
19:36 sputnik13 probably the nova guys need to catch up
19:36 JoeJulian I asked about that on the blog of the guy that wrote this bit and he said it's not supported.
19:36 lalatenduM sputnik13, check this out https://access.redhat.com/site/documentation/en-US/Red_Hat_Storage/2.1/html/Configuring_Red_Hat_OpenStack_with_Red_Hat_Storage/index.html
19:36 glusterbot Title: Configuring Red Hat OpenStack with Red Hat Storage (at access.redhat.com)
19:37 sputnik13 I mean it's certainly not a deficiency on the part of libvirt or kvm right, otherwise boot from cinder volume with libgfapi wouldn't work
19:38 sputnik13 lalatenduM: thanks for the link, from what I can see it asks that we use fuse for compute...  actually those instructions don't seem to cover getting libgfapi working with cinder either
19:39 lalatenduM sputnik13,  libgfapi is still not ready I guess, and that's the reason RH is not supporting it
19:39 vpshastry1 left #gluster
19:40 sputnik13 lalatenduM: I don't know that that's necessarily true
19:40 sputnik13 regardless, thank you for the pointer :)
19:41 semiosis lalatenduM: RHS might not have glusterfs 3.4 yet.  libgfapi was released with 3.4
19:41 lalatenduM sputnik13,  I am not sure either , I need to figure that out and make sure we have some documentation around it. May be johnmark will help us
19:42 lalatenduM semiosis, RHS2.1 is 3.4
19:42 kkeithley_mtg Cinder doesn't use gfapi yet. Next release
19:42 semiosis ah
19:43 sputnik13 true, cinder doesn't use gfapi directly, but it does communicate specific information about the use of gluster, so nova still works with libgfapi when mounting cinder volumes
19:43 sputnik13 at least that's what I'm observing right now
19:44 JoeJulian right
19:45 lalatenduM sputnik13, can I ask you a favor :), once you are done with your setup, plz write a blog and send it to gluster-users, even if you dont get success , plz send it to gluster-users.
19:46 JoeJulian lalatenduM: RH doesn't recommend a lot of things unless they have a specific user for which paying someone to do a bunch of qa testing is cost effective, at least that's the way it looks from my perspective.
19:46 sputnik13 lalatenduM: http://blog.flaper87.com/post/520b7ff00f06d37b7a766dc0/
19:46 glusterbot Title: Using libgfapi to access Glusterfs Volumes in Nova (at blog.flaper87.com)
19:46 sputnik13 someone else did the work already :)
19:46 JoeJulian yeah, that's the one... :D
19:47 sputnik13 everything looks the same as setting up gluster the normal way
19:47 lalatenduM JoeJulian, I am not sure :),
19:47 lalatenduM sputnik13, cool
19:47 sputnik13 the difference is the nova.conf setting
19:48 lalatenduM JoeJulian, do u know if this blog is syndicated at gluster.org blogs johnmark ^^
19:48 sputnik13 JoeJulian: I would agree with the assessment about RH at least from an outsider's perspective, though I would tend to say they're justified for doing that
19:49 sputnik13 I mean they *are* a business, they can't throw out statements about "supporting" something unless they have good business reasons to do so
19:49 JoeJulian Yeah, I'm not poking sticks, just observing.
19:49 sputnik13 as there are associated costs with statements like that
19:49 sputnik13 understood
19:50 JoeJulian I know they don't recommend a lot of things that GlusterFS is currently used for.
19:50 lijiejun joined #gluster
19:51 JoeJulian Other than that one article, it doesn't look like Flavio Percoco blogs about Gluster.
19:52 lalatenduM sputnik13, JoeJulian , I am not sure if my position allows me speaking abt RH perspective :), so I will be rather silent
19:52 JoeJulian Hehe
19:52 JoeJulian We'll talk at summit... :D
19:53 sputnik13 huh?  do you work for RH lalatenduM? :)
19:53 lalatenduM sputnik13, yes :)
19:53 sputnik13 hahah, cool
19:53 dbruhn is the replace-brick command still in 3.3.2?
19:53 lalatenduM JoeJulian, yeah thats the right place
19:53 kkeithley_mtg Honestly, as near as I can tell Red Hat doesn't make any recommendations about what you can use _Community_ GlusterFS for. ;-)
19:53 JoeJulian Depends on how you define "work".
19:53 lalatenduM JoeJulian, true :)
19:54 kkeithley_mtg We do make recommendations about what you can use RHS for.
19:54 JoeJulian dbruhn: I should hope so... They're not supposed to change the commands you've already scripted for in released versions.
19:54 lalatenduM kkeithley_mtg, the question is how RH decides the use case
19:55 sputnik13 wonderful, the iozone numbers on my VM look way better than on the host
19:55 lalatenduM I mean use cases
19:55 sputnik13 I have an 8 node gluster setup in distribute+replicate with 2 replica
19:55 sputnik13 all on 10gig links
19:55 lalatenduM dbruhn, yeah it should be there
19:55 dbruhn Just checking before I start digging into this. JoeJulian if I have a brick that's weird can I just format the filesystem on it and have the selfheal refill it? or is there something else I need to do?
19:55 sputnik13 on a host using fuse mount I get about 500MB/s
19:56 sputnik13 on a VM I'm seeing 10x that for some numbers
19:56 sputnik13 which is a bit weird but whatever, my users will not complain ;)
19:56 LoudNoises sputnik13: we have a very similar setup and see about 7GB/s
19:57 JoeJulian dbruhn: You'll have to re-add the volume id xattr...
19:57 sputnik13 LoudNoises: what tuning params for gluster and test params for iozone are you using, if you don't mind me asking?
19:57 kkeithley_mtg lalatenduM: that's easy. Product Management spends lots of time listening to (potential) customers about what they want in a storage product, then they narrow down the popular use cases to those that we can qualify and support. We'd love to do _everything_ but we don't have the resources to do everything.
19:57 lalatenduM LoudNoises, awesome , on 10gig
19:57 LoudNoises yea bonded on each node
19:57 LoudNoises we're seeing that on real file performance, not iozone
19:58 lalatenduM kkeithley_mtg, yup :)
19:59 lalatenduM LoudNoises, so you have multiple 10gig bonded together?
19:59 LoudNoises yea 2 per server and 2 per client
19:59 dbruhn Is there a better way to go about replacing a brick?
19:59 LoudNoises our use case is also multithreaded, which we've found will make a big difference
20:00 LoudNoises some of the benchmarking tools aren't
20:00 JoeJulian dbruhn: Not in my opinion.
20:01 JoeJulian dbruhn: What I do, though, is add a replacement drive add it to the volume group, then pvmove the brick logical volumes to the new drive.
20:01 criticalhammer LoudNoises: 10gig copper or fiber?
20:01 LoudNoises copper
20:01 lalatenduM LoudNoises, for replacing bricks, do 1. add brick 2. remove brick start 3. wait for remove brick to complete 4. remove brick commit
20:02 criticalhammer thats good news
20:02 criticalhammer im planning on doing 10gig copper
20:02 criticalhammer how's your latency?
20:02 criticalhammer LoudNoises:
20:02 LoudNoises lalatenduM: JoeJulian would be in a much better position to answer that, we've only had to do it a few times and I didn't have to
20:02 lalatenduM LoudNoises, this would work with active I/O, make sure that after step 1 you do not perform a rebalance
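
lalatenduM's four steps, spelled out against a hypothetical volume; note that on a replicated volume bricks are added and removed in multiples of the replica count, which is why JoeJulian's reformat-and-heal approach below is usually simpler for a single bad replica brick:

    # 1. add the replacement brick(s)
    gluster volume add-brick myvol newhost:/export/brick1
    # 2. start draining the brick being retired
    gluster volume remove-brick myvol oldhost:/export/brick1 start
    # 3. wait until status reports the migration as completed
    gluster volume remove-brick myvol oldhost:/export/brick1 status
    # 4. finalise the removal
    gluster volume remove-brick myvol oldhost:/export/brick1 commit
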
20:02 dbruhn what I have is an xfs file system on raid that is behaving strangely and keeps giving space errors even though it has plenty of space, which keeps causing all sorts of split-brain issues. I don't have an extra "drive" to throw in and migrate it.
20:02 lalatenduM LoudNoises, yup, agree , JoeJulian is the right guy
20:03 LoudNoises lalatenduM: oh sorry, i thought you were asking me :)
20:03 robos joined #gluster
20:04 LoudNoises criticalhammer: we don't really have latency concerns as most of our use case is sequential reads, but under high load the latency can get a bit high cause of how we have our raidcards tuned
20:04 JoeJulian dbruhn: Then, yeah. I would kill that one brick's glusterfsd, getfattr to get the volume id then format and set the volume id and start the brick again (start volume $vol force).
20:04 lalatenduM LoudNoises, np :), I know for sure that JoeJulian has more experience of using gluster in production , so he is the right person for this
20:05 JoeJulian hehe
20:05 dbruhn the volume id is an extended attribute on the root directory used for the brick right?
20:05 JoeJulian right
20:05 dbruhn and I don't need to worry about the .glusterfs directory at all?
20:05 JoeJulian not at all.
20:05 dbruhn awesome, hopefully this will make my life a bit easier going forward.
20:06 JoeJulian My process is only valid, of course, for replicated bricks.
20:06 dbruhn yep, it is
20:06 dbruhn 12 x 2
20:07 dbruhn do I need to worry about trusted.glusterfs.dh and trusted.gfid?
20:07 JoeJulian nope. Just trusted.glusterfs.volume-id
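
JoeJulian's procedure as a sketch, assuming the bad brick is mounted at /export/brick1, its replica partner holds good data, and the device name is made up:

    # note the volume id before touching anything (readable on the healthy replica too)
    getfattr -n trusted.glusterfs.volume-id -e hex /export/brick1

    # kill only this brick's glusterfsd; its PID is listed in `gluster volume status myvol`
    kill <pid>

    # recreate the filesystem, remount, and put the volume id back
    mkfs.xfs -f /dev/vg0/brick1 && mount /export/brick1
    setfattr -n trusted.glusterfs.volume-id -v 0x<hex-value-from-above> /export/brick1

    # restart the brick process and let self-heal repopulate it
    gluster volume start myvol force
    gluster volume heal myvol full
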
20:11 lijiejun joined #gluster
20:14 shyam left #gluster
20:20 robos joined #gluster
20:23 elico1 joined #gluster
20:23 elico1 left #gluster
20:25 criticalhammer left #gluster
20:27 robos joined #gluster
20:36 MacWinner joined #gluster
20:55 lpabon joined #gluster
21:00 robos joined #gluster
21:04 sputnik13 is it possible to set owner and group for a mount when it's mounted without changing the underlying owner/group?
21:04 sputnik13 trying to mount gluster volume across multiple compute nodes on openstack, and some of them have different uid and gid for nova
21:04 tdasilva left #gluster
21:08 criticalhammer1 joined #gluster
21:15 semiosis sputnik13: afaik, not possible
21:20 andreask joined #gluster
21:30 JoeJulian sputnik13: You should really keep that consistent. Are you using puppet or some other configuration management system?
21:30 sputnik13 JoeJulian: using puppet
21:31 JoeJulian Then you should manage the user and group there (easy to say now that they're already inconsistent).
21:31 jbustos joined #gluster
21:31 sputnik13 heh
21:33 JoeJulian I'm actually kind-of surprised that ubuntu doesn't keep that consistent like the EL based distros do.
21:34 sputnik13 I wonder if even EL keeps it consistent or whether it's by chance?
21:35 sputnik13 normally you don't specify a specific uid or gid in a package
21:37 JoeJulian preinstall scriptlet (using /bin/sh):
21:37 JoeJulian getent group nova >/dev/null || groupadd -r nova --gid 162
21:37 JoeJulian if ! getent passwd nova >/dev/null; then
21:37 JoeJulian useradd -u 162 -r -g nova -G nova,nobody -d /var/lib/nova -s /sbin/nologin -c "OpenStack Nova Daemons" nova
21:37 JoeJulian fi
21:37 JoeJulian exit 0
21:37 JoeJulian gah, that's not what I told it to copy...
21:37 sputnik13 :)
21:37 JoeJulian but yeah, that's the preinstall script in the rpm.
21:37 Matthaeus What I've seen (albeit on debian-based systems) is that daemon UIDs take the next available one after 100.  User accounts take the next available one after 1000.  System accounts start at 0.  If stuff is installed in the same order, then the uids will match across systems.
21:37 sputnik13 ic
21:38 sputnik13 Matthaeus: yeah, that's consistent with what I know, but somehow things weren't installed in the same order
21:38 sputnik13 probably because puppet doesn't guarantee ordering
21:38 sputnik13 :(
21:38 JoeJulian So, what I think you're saying is that you agree that .rpm > .deb
21:38 sputnik13 uhh, no
21:39 sputnik13 :)
21:39 JoeJulian "User <<| |>> -> Package <<| |>>" <- ensure your users are created before your packages are installed.
21:39 sputnik13 that's a quantum leap to go from uid being fixed to rpm > deb :-P
21:40 mattappe_ joined #gluster
21:40 sputnik13 JoeJulian: I don't think the puppet scripts were creating the users, I think it's expected that the packages set up the users
21:40 sputnik13 at this point though, if I add the user via puppet and specify a uid/gid, I'm probably going to end up breaking everything
21:40 * sputnik13 sighs
21:41 JoeJulian test it, but I'm sure if the nova user exists prior to the package being installed, it will use that user.
21:41 sputnik13 I have a big maintenance planned for this friday, I might try to fix it on Friday
21:42 sputnik13 JoeJulian: agreed, but I thought we were talking about using the User resource from puppet to set a specific and consistent uid/gid across all nodes, but if I do that now all of the already installed nodes will have their nova user/group uid/gid changed from underneath them
21:42 JoeJulian Ah, right... that part kind-of sucks, I agree.
21:43 sputnik13 I'm going to have to try that during the maintenance
21:43 sputnik13 luckily it's an all day outage :-)
21:43 JoeJulian The best I could suggest is to find a uid that's not used, use find to change the current user to the new uid, then change the user to the new uid.
21:44 sputnik13 JoeJulian: good idea
21:44 Matthaeus sputnik13: backups, sir.  Nothing sucks more than having munged the perms on a production filesystem and only figuring it out after the window closes.
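
A sketch of the uid/gid fix-up JoeJulian describes, with made-up numbers (old nova uid/gid 105/107, target 162/162, which must be unused); stop the services and take backups first, as Matthaeus says:

    service openstack-nova-compute stop    # service name varies by distribution

    # move the account to the target ids
    usermod -u 162 nova
    groupmod -g 162 nova

    # re-own anything still carrying the old numeric ids
    find / -uid 105 -exec chown -h nova {} +
    find / -gid 107 -exec chgrp -h nova {} +
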
21:50 nightwalk joined #gluster
21:50 JoeJulian One other way of solving it... since you've puppetized everything, just slaughter your cattle and grow new ones.
21:51 JoeJulian ie. reinstall.
21:51 sputnik13 JoeJulian: yeah, thought of that...  not an option because the VMs need to come back up :)
21:57 cjanbanan joined #gluster
21:58 glustercjb joined #gluster
21:59 mattappe_ joined #gluster
21:59 glustercjb anyone here have experience with geo-replication on 3.5 (beta)
21:59 glustercjb I have some failure scenario questions
22:02 tryggvil joined #gluster
22:06 JoeJulian I think georeplication users represent maybe generously 10% of the user base. The beta version I suspect you'd be lucky to find 1% of those testing it. That puts the likelihood around 0.1% of 188ish users. Granted, I suspect a higher than average percent of testers are probably here or in the mailing list, but I still think you're not likely to find any actual information about that yet.
22:07 JoeJulian But if you ask questions, we generally try to find answers.
22:07 semiosis where'd you get 188ish?
22:07 glustercjb haha, nice
22:07 JoeJulian 191 (at the time I looked) - bots.
22:07 elico joined #gluster
22:07 JoeJulian Just in this channel, of course...
22:07 glustercjb right
22:08 glustercjb I'd probably be better off sending a message to the list (dev)
22:08 semiosis not the dev list
22:08 JoeJulian Not unless you're asking questions relating to how you're going to patch or further develop the product.
22:09 glustercjb might anyone here know why the mountbroker (non-root user ssh) geo-replication was removed from 3.5?
22:16 JoeJulian It's still in the source.
22:19 coredump joined #gluster
22:20 glustercjb xlators/mgmt/glusterd/src/glusterd-geo-rep.c:                                        "Non-root username (%s@%s) not allowed.",
22:22 fidevo joined #gluster
22:27 JoeJulian bug 998933
22:27 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=998933 high, medium, ---, asengupt, MODIFIED , Geo-rep mount-broker going faulty because of IOError on slave log file.
22:31 elico joined #gluster
22:31 glustercjb thinking
22:32 JoeJulian me too
22:32 glustercjb so, from waht I've seen, it seems as though geo-repl is not done
22:33 glustercjb reason I say this is because of the gverify.sh script that is run when you do a "geo-replication create"
22:33 glustercjb that doesn't take any alternative ssh private keys into consideration
22:33 glustercjb so, in my case it was failing, always
22:34 glustercjb even though the default is to put the key under /var/lib/glusterd/geo-replication/secret.pem
22:34 glustercjb (this is all for the most recent build of 3.5, FWIW)
22:35 glustercjb in addition, there's a script in there that specifys "gsyncd" as the only command that can be run on the peer
22:36 glustercjb which is the way I had mine set up, but again, that gsync script does crazy stuff like echo blah first, then checks the space of the target cluster, etc
22:36 glustercjb all those would fail if you can only run gsyncd
22:36 JoeJulian Right.
22:37 JoeJulian It doesn't look like this is accurate then: https://github.com/gluster/glusterfs/blob/release-3.5/doc/admin-guide/en-US/markdown/admin_geo-replication.md#using-mountbroker-for-slaves
22:37 glusterbot Title: glusterfs/doc/admin-guide/en-US/markdown/admin_geo-replication.md at release-3.5 · gluster/glusterfs · GitHub (at github.com)
22:37 JoeJulian Would you agree?
22:37 glustercjb and yes, from 3.4, the mount-broker was a little bit sensitive, to say the least
22:38 glustercjb checking what you posted
22:38 theron joined #gluster
22:39 elico joined #gluster
22:40 glustercjb yea, that won't work without root access on the target
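
For anyone comparing notes, the mountbroker setup that admin guide describes boils down to options in glusterd's own volfile on the slave nodes, roughly as below (user geoaccount, group geogroup and the paths are the guide's examples, not verified against the 3.5 beta); glustercjb's point is that 3.5 beta appears to reject this without root on the target:

    # /etc/glusterfs/glusterd.vol on each slave node
    volume management
        type mgmt/glusterd
        option working-directory /var/lib/glusterd
        option mountbroker-root /var/mountbroker-root
        option mountbroker-geo-replication.geoaccount slavevol
        option geo-replication-log-group geogroup
        option rpc-auth-allow-insecure on
    end-volume
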
22:41 JoeJulian Could you file a bug please. Include what you're trying to accomplish along with the configuration settings you've been using that no longer work. I'll see if I can duplicate it tomorrow.
22:41 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
22:42 glustercjb no problem I can file it tonight
22:42 glustercjb on my way out, but I'll hop back on tomorrow and we can discuss further
22:43 glustercjb thanks for the help
22:44 JoeJulian You're welcome.
22:44 glustercjb left #gluster
22:46 elico joined #gluster
22:48 elico joined #gluster
22:50 cjanbanan joined #gluster
22:51 elico joined #gluster
22:58 theron joined #gluster
23:01 cjanbanan joined #gluster
23:02 sprachgenerator joined #gluster
23:08 elico joined #gluster
23:10 theron_ joined #gluster
23:12 elico joined #gluster
23:26 elico joined #gluster
23:28 elico joined #gluster
23:29 elico joined #gluster
23:31 elico joined #gluster
23:35 elico joined #gluster
23:36 elico joined #gluster
23:39 elico joined #gluster
23:40 gdubreui joined #gluster
23:41 theron joined #gluster
23:52 elico joined #gluster
23:57 elico joined #gluster
