
IRC log for #gluster, 2014-02-17


All times shown according to UTC.

Time Nick Message
00:05 hagarth joined #gluster
00:17 neurodrone__ joined #gluster
00:30 badone_ joined #gluster
00:46 ctria joined #gluster
00:53 badone__ joined #gluster
00:56 ninkotech joined #gluster
00:57 pixelgremlins_ba joined #gluster
00:58 bala joined #gluster
01:02 badone joined #gluster
01:03 jag3773 joined #gluster
01:35 jag3773 joined #gluster
01:42 kevein joined #gluster
02:00 tokik joined #gluster
02:22 kevein joined #gluster
03:01 nightwalk joined #gluster
03:05 tokik_ joined #gluster
03:16 harish joined #gluster
03:26 shubhendu joined #gluster
03:28 RameshN joined #gluster
03:33 haomaiwang joined #gluster
03:35 haomai___ joined #gluster
03:50 bharata-rao joined #gluster
04:03 mohankumar__ joined #gluster
04:10 jag3773 joined #gluster
04:10 dusmant joined #gluster
04:15 satheesh joined #gluster
04:18 mohankumar__ joined #gluster
04:19 ndarshan joined #gluster
04:20 CheRi joined #gluster
04:27 satheesh joined #gluster
04:29 zapotah joined #gluster
04:30 aravindavk joined #gluster
04:31 shylesh joined #gluster
04:31 sahina joined #gluster
04:34 neurodrone__ joined #gluster
04:44 meghanam joined #gluster
04:50 dusmant joined #gluster
04:50 neurodrone__ joined #gluster
04:58 itisravi joined #gluster
05:01 saurabh joined #gluster
05:01 rastar joined #gluster
05:03 ppai joined #gluster
05:07 hagarth joined #gluster
05:16 zapotah joined #gluster
05:18 ajha joined #gluster
05:22 prasanth joined #gluster
05:31 neurodrone__ joined #gluster
05:32 tokik joined #gluster
05:33 bala joined #gluster
05:40 bala joined #gluster
05:48 davinder joined #gluster
05:51 rjoseph joined #gluster
05:54 nshaikh joined #gluster
05:56 sahina joined #gluster
05:57 dusmant joined #gluster
06:00 hagarth joined #gluster
06:05 jag3773 joined #gluster
06:12 raghu joined #gluster
06:14 Philambdo joined #gluster
06:16 eastz0r joined #gluster
06:17 pk1 joined #gluster
06:29 raghug joined #gluster
06:37 ktosiek_ joined #gluster
06:37 jporterfield joined #gluster
06:41 mohankumar__ joined #gluster
06:51 benjamin_____ joined #gluster
06:53 vimal joined #gluster
06:55 spandit joined #gluster
06:57 hagarth joined #gluster
06:57 pixelgremlins joined #gluster
07:25 hagarth joined #gluster
07:27 jtux joined #gluster
07:27 raghug joined #gluster
07:30 ProT-0-TypE joined #gluster
07:41 ekuric joined #gluster
07:54 raghug joined #gluster
07:59 eseyman joined #gluster
08:05 jporterfield joined #gluster
08:05 ctria joined #gluster
08:08 kanagaraj joined #gluster
08:20 kevein_ joined #gluster
08:26 keytab joined #gluster
08:29 kevein__ joined #gluster
08:34 mohankumar__ joined #gluster
08:34 Humble joined #gluster
08:40 andreask joined #gluster
08:40 andreask joined #gluster
08:40 kanagaraj_ joined #gluster
08:41 kanagaraj joined #gluster
08:42 surabhi joined #gluster
08:48 tjikkun joined #gluster
08:48 tjikkun joined #gluster
08:49 glusterbot New news from resolvedglusterbugs: [Bug 1065623] "Gluster volume status" command doesn't return to prompt if peer netwok is down <https://bugzilla.redhat.com/show_bug.cgi?id=1065623>
08:55 lalatenduM joined #gluster
08:56 liquidat joined #gluster
09:01 haomaiwang joined #gluster
09:05 moo34 joined #gluster
09:06 moo34 hi
09:06 glusterbot moo34: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
09:06 moo34 is the performance.enable-O_SYNC available/applicable?
09:06 dkorzhevin joined #gluster
09:20 bharata-rao joined #gluster
09:23 mbukatov joined #gluster
09:29 bazzles joined #gluster
09:32 rjoseph1 joined #gluster
09:35 rjoseph joined #gluster
09:37 moo34 I'm getting this: sudo gluster volume set fsqvol performance.enable-O_SYNC on -- volume set: failed: option : performance.enable-O_SYNC does not exist. Did you mean performance.cache-size?
09:41 hagarth moo34: why are you trying to enable that option?
09:41 moo34 yes. But I think I found when/why it was removed: http://review.gluster.org/#/c/3947/9
09:41 glusterbot Title: Gerrit Code Review (at review.gluster.org)
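(Aside: a quick way to see which options a given GlusterFS build still accepts is the built-in listing below; this is only a sketch and the exact output varies by version.)
    gluster volume set help                    # lists every settable option with a short description
    gluster volume set help | grep -i sync     # narrow it down to anything O_SYNC/sync related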
09:44 kanagaraj joined #gluster
09:46 nullck joined #gluster
09:49 badone joined #gluster
09:50 Frankl joined #gluster
09:51 kanagaraj joined #gluster
10:09 abyss^ joined #gluster
10:11 meghanam joined #gluster
10:11 meghanam_ joined #gluster
10:15 qdk joined #gluster
10:16 Frankl Hi, could anyone take a look at my issue: I fail to mount a glusterfs volume using the fuse client, while nfs is ok
10:17 Frankl when mounting via fuse, the mount just hangs
10:17 jporterfield joined #gluster
10:24 bharata-rao joined #gluster
10:25 hagarth Frankl: you might want to check your fuse client log files for hints
10:26 Frankl yes. I checked that.
10:26 Frankl After callback setvolume, there is nothing more
10:26 Frankl e.g
10:26 Frankl [2014-02-17 18:09:16.330235] I [client-handshake.c:1636:select_server_supported_programs] 0-sh-mams-client-26: Using Program GlusterFS 3.3.0.5rhs_iqiyi_6, Num (1298437), Version (330)
10:26 Frankl [2014-02-17 18:09:16.330729] I [client-handshake.c:1433:client_setvolume_cbk] 0-sh-mams-client-26: Connected to 10.121.56.141:24011, attached to remote volume '/mnt/xfsd/sh-mams'.
10:26 Frankl [2014-02-17 18:09:16.330757] I [client-handshake.c:1445:client_setvolume_cbk] 0-sh-mams-client-26: Server and Client lk-version numbers are not same, reopening the fds
10:26 glusterbot Frankl: This is normal behavior and can safely be ignored.
10:26 Frankl [2014-02-17 18:09:16.331069] I [client-handshake.c:453:client_set_lk_version_cbk] 0-sh-mams-client-26: Server lk version = 1
10:27 wica joined #gluster
10:28 wica Hi, is there a translator to split files, so that they are better distributed over the volume?
10:28 stickyboy I'm getting quite a bit of this in my log file:
10:28 stickyboy 0-management: connection attempt failed (Connection refused)
10:29 spiekey joined #gluster
10:29 spiekey Hello!
10:29 glusterbot spiekey: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
10:29 spiekey why is node2 different to node3? http://fpaste.org/77815/26329721/
10:29 glusterbot Title: #77815 Fedora Project Pastebin (at fpaste.org)
10:30 spiekey info split-brain shows me 0 Results, so i guess i am fine and i do not have a split brain
10:31 spiekey are those open/locked files? in my paste?
10:33 mohankumar__ joined #gluster
10:36 ppai joined #gluster
10:48 jporterfield joined #gluster
10:50 keytab joined #gluster
10:56 al joined #gluster
11:02 Nev___ joined #gluster
11:02 Nev___ how to fix a not working "ls" on my distributed volume?
11:03 Nev___ i can list 2 levels deep in the filesystem but, when i list the 3rd, it is just stuck
11:03 Nev___ on the bricks , everything is fine
11:07 franc joined #gluster
11:10 hagarth spiekey: it is usually the combination of output from both nodes that determines your self heal delta
11:10 hagarth spiekey: if files are actively being written to, then you can see false positives in the o/p of self-heal info
11:11 hagarth Frankl: do you have any firewall rules on your clients that prevents connections from being established to servers?
11:13 Frankl hagarth: no firewall, I have tried many client machines and I disabled selinux and iptables, all hang
11:13 hagarth stickyboy: looks similar to https://bugzilla.redhat.com/show_bug.cgi?id=977497
11:13 glusterbot Bug 977497: unspecified, high, 3.4.0, kparthas, NEW , gluster spamming with E [socket.c:2788:socket_connect] 0-management: connection attempt failed (Connection refused) when nfs daemon is off
11:15 hagarth Frankl: can you please send out an email with more details on the users list? providing a pointer to a client log file in loglevel DEBUG will help.
11:15 Frankl suure
11:18 hagarth Nev___: do you have any pending self-heals? the client log file would show such details
11:18 Nev___ no the log files are fine
11:18 Nev___ there is nothing
11:19 Nev___ i must mention, that we are stuck with 3.0.7
11:19 Nev___ and its a distributed, not replicated filesystem
11:21 spiekey hagarth: but this looks clean, doesn't it? http://fpaste.org/77822/92636095/
11:21 glusterbot Title: #77822 Fedora Project Pastebin (at fpaste.org)
11:26 Nev___ hm, if i ls the file directly, i get a listing, but if i ls the folder, there are 0 files in it
11:26 Nev___ weird
11:27 ccha2 hello, I can't kill -9 glusterfsd
11:27 ccha2 I don't want to reboot
11:27 ccha2 how can I kill it ?
11:27 ccha2 root     32597  8.9  0.0      0     0 ?        Zsl  Jan29 2376:58 [glusterfsd] <defunct>
11:28 diegows joined #gluster
11:32 hagarth spiekey: right
11:33 DV joined #gluster
11:34 hagarth Nev___: 3.0.7 is quite ancient - might be useful to check if the client is able to see all bricks (and I hope you are not using ext4 for the bricks)
11:36 Nev___ @hagarth, no we use XFS its running since 3 years now without any problems
11:36 hagarth Nev___: ok
11:36 ira joined #gluster
11:37 Nev___ we made an network update the last weekend, only added a backbone switch, for that we unmounted everything
11:37 Nev___ rebooted the machines
11:37 Nev___ and started everything
11:37 Nev___ now we discovered that we can see only 2-3 level deep in the filesystem
11:37 Nev___ on the brick itself we can dig all levels down
11:38 rossi_ joined #gluster
11:41 Nev___ every client is reporting the correct file size for the brick, so that should be ok.
11:42 burn420 joined #gluster
11:42 rjoseph joined #gluster
11:45 Frankl hagarth: One of the nodes is the root cause, the node showed many messages on the console: BUG: soft lockup CPU#? stuck for x seconds
11:58 Nev___ how  can we trigger a rehash of the existing files, in meaning rebuilding xattrs of each file
11:59 hagarth joined #gluster
12:03 edward1 joined #gluster
12:05 ccha2 hum
12:05 ccha2 tcp        0      0 0.0.0.0:24009           0.0.0.0:*               LISTEN      -
12:06 ccha2 how can I drop this listening port ?
12:07 ccha2 lsof | grep 24009 returns nothing
12:08 itisravi_ joined #gluster
12:09 OaaSvc joined #gluster
12:10 kkeithley joined #gluster
12:14 jikz joined #gluster
12:23 pk1 joined #gluster
12:25 bfoster joined #gluster
12:29 abyss^ if I have a Distributed-Replicate volume with 4 bricks of ~500GB each, and I add another gluster server and add a brick from it to this volume, but the disk on that server only has 200GB, that should just expand the volume by 200GB, yes? I don't have to add a brick of the same size, yes?
12:32 nightwalk joined #gluster
12:33 spandit joined #gluster
12:34 spandit joined #gluster
12:39 kkeithley abyss^: it'll work, but not very well. DHT (distribute) really expects all bricks to be the same size. When the 200GB bricks fill up, performance will suffer.
12:40 45PAAGFQE joined #gluster
12:41 Nev___ is there a way to reset the xattr ? and reinventory them?
12:42 Nev___ or is there some caching going on, which i can delete?
12:42 Nev___ because this partial file listing is driving me mad
12:47 kanagaraj_ joined #gluster
12:54 kanagaraj joined #gluster
12:57 nshaikh joined #gluster
12:58 haomai___ joined #gluster
13:06 pk1 joined #gluster
13:08 kanagaraj joined #gluster
13:15 portante joined #gluster
13:21 tqrst joined #gluster
13:23 dusmant joined #gluster
13:27 tqrst came in to work this morning to find 3 of my bricks (in a 2x distribute replicate volume, oddly enough) at 100% and the rest at their usual 50-65%.
13:27 tqrst this should be a fun morning
13:30 ccha2 I have a replication vol and on 1 server I got Number of Bricks: 1 x 2 = 2
13:30 ccha2 but on the other one I got this Number of Bricks: 0 x 3 = 2
13:31 ccha2 I don't understand the "Number of Bricks: 0 x 3 = 2"
13:31 tqrst I was going to say  "at least it got the math right", but then you pasted your second line
13:32 neurodrone__ joined #gluster
13:33 pk1 joined #gluster
13:34 pk1 left #gluster
13:41 Slash joined #gluster
13:44 fsimonce joined #gluster
13:47 shylesh joined #gluster
13:51 abyss^ kkeithley: oh, I didn't know about that:/
13:51 davinder joined #gluster
13:53 abyss^ kkeithley: can I read about that anywhere? I'd like to know what disadvantages there will be
13:54 prasanth joined #gluster
13:56 harish joined #gluster
13:56 kkeithley Maybe the usual places? docs on gluster.org? I don't know specifically where it's documented.  What's going to happen is the DHT hash will try to place content on  the full 200GB brick, which will silently fail. Then it'll write it on a brick with space available plus place a link on the 200GB brick.  Later on access will read the link to find where the content really lives.
13:57 kkeithley All in all, not the ideal situation.
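(Aside: a sketch of how those DHT link files can be spotted on a brick; the brick path /bricks/small is hypothetical. Link files are the zero-byte, sticky-bit entries carrying a dht.linkto xattr.)
    find /bricks/small -type f -perm -1000 -size 0                        # candidate DHT link files
    getfattr -n trusted.glusterfs.dht.linkto /bricks/small/path/to/file   # names the subvolume holding the real data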
13:59 ccha2 ok there was a bug
14:00 ccha2 how can I know on which version a patch had been added ?
14:01 ccha2 for example about this patch http://review.gluster.org/#/c/5893/
14:01 glusterbot Title: Gerrit Code Review (at review.gluster.org)
14:01 prasanth joined #gluster
14:01 Nev___ someone got an idea why a find command would stop listing the directories and files? on a brick?
14:03 neurodrone__ joined #gluster
14:03 Nev___ we are running a find <glusterfs-mountpoint> --maxdepth=1 , that is ok, maxdepth=2 is ok, but then maxdepth=3 stops in the middle
14:04 zapotah joined #gluster
14:04 zapotah joined #gluster
14:04 kkeithley ccha2: it doesn't appear to be in the release-3.4 branch. It's in master and release-3.5
14:04 Nev___ so something is broken, but what could it be ?
14:07 moo34 joined #gluster
14:07 moo34 I'm getting EINVAL when writing to a file using O_DIRECT. I'm on Ubuntu 13.10 (kernel 3.11) - I thought Fuse supports O_DIRECT now?
14:09 abyss^ kkeithley: OK. Enough explanation for me. Thank you.
14:09 ctria joined #gluster
14:16 sroy_ joined #gluster
14:18 benjamin_____ joined #gluster
14:19 diegows joined #gluster
14:21 aixsyd joined #gluster
14:21 aixsyd morning gents
14:22 dbruhn joined #gluster
14:22 japuzzo joined #gluster
14:23 tqrst moin
14:23 aixsyd dbruhn! I'm so happy!
14:23 dbruhn lol good to hear
14:23 dbruhn why?
14:23 aixsyd I got my clusters in production - and had my first failure
14:23 aixsyd if youll remember, i'm running ZFS underneath glusterfs
14:23 dbruhn Well I'm not sure that would make me happy
14:23 dbruhn oh yeah
14:24 aixsyd and node 2 of 2 saturday night started telling me one of my drives had corrupt data
14:24 aixsyd (as i'm at home)
14:24 aixsyd so i rebooted it, maybe it was a fluke - after an hour, the server never came back online
14:24 mohankumar__ joined #gluster
14:25 ccha2 kkeithley: how did you know which version? is there any hint or a webpage ?
14:25 aixsyd come in today, it was my raid card giving me a warning about the drive. replaced it, it booted up. ZFS started resilver right away, and gluster did its heal automatically - all good, and never had any VM downtime over the weekend ^_^
14:25 dbruhn nice!
14:25 aixsyd aka - YAY!
14:26 aixsyd gluster haled over the IB ports as it should have
14:26 aixsyd *healed
14:26 aixsyd after so much trial and error, i'm glad to see my first production splat went so smoothly
14:27 dbruhn That's the benefits of testing
14:27 dbruhn This is for you
14:27 dbruhn http://www.youtube.com/watch?v=xemLz_fR1Ac
14:27 glusterbot Title: Andrew W.K. - Its Time To Party - Official Music Video - YouTube (at www.youtube.com)
14:27 aixsyd I actually was doing the Andrew WK headbang this morning - so yep!
14:29 aixsyd also, my rack is starting to look a bit more presentable.
14:29 aixsyd from this retardedness: https://i.imgur.com/bDyDpO3.jpg
14:30 aixsyd to this: https://i.imgur.com/uBEKa7c.jpg
14:30 rwheeler joined #gluster
14:30 kkeithley ccha2: I checked out the source and looked for the change set in the git log. It doesn't show up in the release-3.4 branch
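(Aside: roughly the check kkeithley describes, run inside a glusterfs source checkout; <commit-sha> stands for the hash Gerrit shows for that change set.)
    git log --oneline origin/release-3.4 | grep -i "<change summary>"   # search the 3.4 branch for the change
    git branch -a --contains <commit-sha>                               # list every branch that carries the commit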
14:31 dbruhn aixsyd, that was a lot of desktop boxes in a server rack... lol
14:33 dbruhn I apparently don't have a picture of anything even close to what my couple racks look like right now.
14:37 bennyturns joined #gluster
14:37 ctria joined #gluster
14:39 P0w3r3d joined #gluster
14:40 hchiramm_ joined #gluster
14:40 aixsyd dbruhn: yep. desktops. there still one desktop in there - its not even in use, its just there to create the right airflow for the AC XD
14:40 aurigus joined #gluster
14:41 Nev___ are there any other attributes stored on the files on a brick?
14:41 Nev___ except the xattr in directories?
14:42 Nev___ which could cause a non listing?
14:42 Nev___ with ls
14:42 dbruhn aixsyd, pick up some cheap blank panels, they look cleaner and will accomplish the same thing.
14:42 aixsyd yeah, but thats money.
14:42 aixsyd xD
14:43 aixsyd a free desktop does it for free.
14:43 aixsyd (this is my supervisor talking)
14:43 benjamin_____ joined #gluster
14:43 dbruhn I see that free server has a light on the front eating electricity ;)
14:43 raghu` joined #gluster
14:44 aixsyd yeah, it auto-powered on - its since turned off.
14:44 aixsyd and unplugged.
14:44 aixsyd oh no no
14:44 aixsyd that dell one? thats legit, and in use.
14:44 aixsyd the one below it is a desktop
14:45 dbruhn ahh
14:45 aixsyd the dell is a 5u backup server. its technically rack mount - but i dont have the mounting kit. it came with feet to stand as a desktop or remove the feet for rack use.
14:45 aixsyd its super huge for its purpose, but....
14:46 dbruhn Will if you get a bug up your rear to pull that other one out. You can get blank panels off ebay for 2-5$ per 1U for the quick snap ones.
14:46 dbruhn If you are worried about front to back air flow, it will help some as that server looks like it has some big air gaps
14:46 ira joined #gluster
14:46 aixsyd dbruhn: that doesnt address the sides, though - right? cause the air flows from right to left, over the top
14:47 mohankumar__ joined #gluster
14:48 dbruhn Most rack gear intakes on the front and exhausts out the rear, there is prosumer gear that will do side to side venting but it's considered a no no, a lot of high end data centers won't let you put that kind of stuff in a rack without special scoops for intake from the front that goes to the side, and then plenum that exhausts the side out the back
14:49 ^^rcaskey dbruhn, so do you buy an extra U and they sandwich you between some scoops?
14:49 ^^rcaskey (although if you have weirdo stuff I assume you are likely buying by the rack)
14:50 dbruhn I've never had anything but full racks in my DC's
14:50 dbruhn And most places doing 1/4 or 1/2 racks still have physically separate spaces with different doors for access
14:51 aixsyd dbruhn: this is a Liebert MCR and the cold air outputs clearly on the left, and intakes on the right. And FWIW - Liebert isnt prosumer...
14:51 aixsyd :P
14:52 aixsyd ¯\_(ツ)_/¯
14:52 cjanbanan joined #gluster
14:52 dbruhn Oh damn, I didn't realize your rack had an integrated cooling system
14:53 dbruhn weird
14:53 aixsyd yup - internal, self-contained AC unit
14:53 aixsyd takes up like... 8-10u on the bottom
14:54 aixsyd whole rack with working AC - $200. retail - $8,000. #fyeah
14:54 dbruhn So where does it put the cold air in the rack?
14:54 bugs_ joined #gluster
14:54 dbruhn Left to right seems dangerous
14:54 aixsyd it circulates it from the bottom up and over.
14:55 aixsyd from one side to the other
14:56 aixsyd theres constant airflow and air movement, hot air rises to the top of the rack, where it gets sucked into the right side intake stream
14:59 aixsyd dbruhn: http://www.emersonnetworkpower.com/documents/en-us/products/racksandintegratedcabinets/documents/sl-15602.pdf  - this explains it perfectly
14:59 dbruhn Are you monitoring the temperature of your equipment?
14:59 aixsyd Yepper
14:59 aixsyd its a cool 75-80F in the rack.
15:00 aixsyd which is nominal, according to the MCR admin guide
15:02 dbruhn It's non ideal, but better than some of the stuff I've walked into for sure.
15:03 tdasilva joined #gluster
15:07 nshaikh joined #gluster
15:09 rpowell joined #gluster
15:12 kkeithley ,,(split-brain)
15:12 glusterbot To heal split-brain in 3.3+, see http://joejulian.name/blog/fixing-split-brain-with-glusterfs-33/ .
15:16 primechuck joined #gluster
15:16 Slash joined #gluster
15:19 REdOG Nev___: ext4?
15:21 mohankumar__ joined #gluster
15:23 Nev___ ext3
15:23 Nev___ centos 5.10
15:26 tqrst ,,(ls-showing-same-file-twice)
15:26 glusterbot tqrst: Error: No factoid matches that key.
15:26 kaptk2 joined #gluster
15:29 rfortier1 joined #gluster
15:30 ndevos tqrst: sounds like a split-brain, probably the inode# is different for each file, "ls -li" would show that
15:34 lpabon joined #gluster
15:36 olisch joined #gluster
15:37 eastz0r joined #gluster
15:37 psyl0n joined #gluster
15:40 Derek joined #gluster
15:44 aixsyd dbruhn: what do you use for monitoring? like Nagios
15:44 wcchandler joined #gluster
15:44 wcchandler left #gluster
15:45 dbruhn I've used Nagios and Cacti forever, I am looking at moving to Zabbix
15:45 aixsyd why zabbix?
15:45 rfortier joined #gluster
15:46 dbruhn I've read good things and figured I would give it a try for a while. The agents are nice; they remove a lot of the need for configuration on the client side.
15:46 Nev___ try check_mk
15:47 Nev___ for nagios
15:47 dbruhn Also when managing multiple data centers you can proxy from one zabbix server up to a master
15:47 Nev___ @dbrun so check out - check_mk wato
15:48 dbruhn looking at it right now
15:48 Nev___ if you want it more easy
15:48 Nev___ just try the omd
15:48 Nev___ http://omdistro.org/
15:48 glusterbot Title: Open Monitoring Distribution - start (at omdistro.org)
15:49 Nev___ icinga, check_mk pnp4nagios + nagvis
15:49 Nev___ all running in one command
15:49 ^^rcaskey does georeplication work in any sane way with volumes being used to store running qemu machines?
15:49 wcchandler joined #gluster
15:49 rotbeard joined #gluster
15:50 dbruhn Is OMD in reaction to the Nagios people taking ownership of some of the community domains?
15:50 jag3773 joined #gluster
15:51 Nev___ omd is around for a long time
15:51 Nev___ its made by the authors of check_mk
15:51 Nev___ than that is pretty awesome
15:51 Nev___ that ..
15:51 nikk if anyone has time to check this out it would be really awesome :) -- https://bugzilla.redhat.com/show_bug.cgi?id=1065551
15:51 glusterbot Bug 1065551: medium, unspecified, ---, kparthas, NEW , Unable to add bricks to replicated volume
15:55 japuzzo joined #gluster
15:56 plarsen joined #gluster
15:58 ctria joined #gluster
16:04 Nev___ tune2fs -O ^dir_index /dev/whatever
16:05 Nev___ did the job...
16:05 Nev___ whatever it does
16:14 semiosis cjanbanan: you should use quorum to avoid split-brain in your scenario
16:15 vpshastry joined #gluster
16:16 semiosis bug 1065705
16:16 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1065705 unspecified, unspecified, ---, vraman, NEW , ls & stat give inconsistent results after file deletion by another client
16:17 semiosis kkeithley: hagarth: any thoughts on that bug? ^^
16:17 semiosis i suspect a performance xlator is the reason, but i dont know which one
16:22 jobewan joined #gluster
16:24 hagarth semiosis: you can try disabling stat-prefetch (md-cache actually) and also mounting with attribute-timeout=0 for the fuse client
16:24 semiosis hagarth: thanks!  will try later today
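(Aside: hagarth's suggestion spelled out as commands, with a hypothetical volume name testvol and mount point /mnt/testvol.)
    gluster volume set testvol performance.stat-prefetch off
    mount -t glusterfs -o attribute-timeout=0 server1:/testvol /mnt/testvol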
16:28 lmickh joined #gluster
16:28 andreask joined #gluster
16:30 lmickh I have a distributed volume with a single brick in it.  Is there a way to convert it to a replicated volume by adding a second brick?
16:31 lmickh Without shutting down the volume and recreating it.
16:33 semiosis lmickh: should be 'gluster volume add-brick <volname> replica 2 <new brick>' iirc
16:33 semiosis see ,,(rtfm)
16:33 glusterbot Read the fairly-adequate manual at http://gluster.org/community/documentation//index.php/Main_Page
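(Aside: a minimal sketch of the conversion semiosis describes, with hypothetical names myvol and server2:/export/brick1; the full heal is what copies the existing data onto the new brick.)
    gluster volume add-brick myvol replica 2 server2:/export/brick1
    gluster volume heal myvol full
    gluster volume info myvol      # should now report Type: Replicate, Number of Bricks: 1 x 2 = 2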
16:34 ndevos Nev___: "tune2fs -O ^dir_index /dev/whatever" disables dir-indexing, for ext3/4 it results in 32-bit hashes for files (dir_index makes them 64-bit)
16:34 DV joined #gluster
16:34 ndevos Nev___: if you need to disable dir_index on your filesystem, you probably run into the ,,(ext4) problem
16:34 glusterbot Nev___: The ext4 bug has been fixed in 3.3.2 and 3.4.0. Read about the ext4 problem at http://goo.gl/Jytba or follow the bug report here http://goo.gl/CO1VZ
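(Aside: a sketch of how to check and, if ever wanted again, restore dir_index; /dev/whatever is Nev___'s own placeholder, and e2fsck must be run on an unmounted filesystem.)
    tune2fs -l /dev/whatever | grep -i features    # look for dir_index in the feature list
    tune2fs -O dir_index /dev/whatever             # re-enable the feature later, if desired
    e2fsck -fD /dev/whatever                       # rebuild/optimize the directory indexes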
16:40 mohankumar__ joined #gluster
16:47 psyl0n joined #gluster
16:53 aquagreen joined #gluster
16:54 Mo__ joined #gluster
16:58 zerick joined #gluster
16:59 acalvo joined #gluster
16:59 acalvo hello
16:59 glusterbot acalvo: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
16:59 acalvo is there any way to manually delete a geo-replication job?
17:00 acalvo or to list all of them that are running?
17:04 acalvo joined #gluster
17:09 micu joined #gluster
17:18 acalvo anyone know how to manually delete a geo-replication job?
17:19 mohankumar__ joined #gluster
17:20 JoeJulian not really.
17:21 JoeJulian The way I would figure that out would be to create a test volume (ie testvol), look at the contents of /var/lib/glusterd/vols/testvol, start georep, look at the contents again and see what changed. Then reverse that and see if it works.
17:21 JoeJulian acalvo: ^
17:21 portante joined #gluster
17:23 acalvo JoeJulian, thanks, I'll look into it
17:25 nikk JoeJulian: heya
17:25 nikk was wondering if you had a chance to look over https://bugzilla.redhat.com/show_bug.cgi?id=1065551
17:25 glusterbot Bug 1065551: medium, unspecified, ---, kparthas, NEW , Unable to add bricks to replicated volume
17:26 JoeJulian Nope, I've had a wife and daughter with a flu so I've been afk all weekend. Let me get a few $dayjob things rolling this morning and I'll get back in to it.
17:27 nikk hehe no problem
17:27 * JoeJulian hasn't even had coffee yet...
17:27 nikk coffee is for the weak
17:28 JoeJulian Coffee sets the tone of my entire day...
17:28 nikk i never liked it but i gave up loading myself up with tea a while ago
17:28 nikk feel better overall
17:29 JoeJulian I live in the Seattle area. You can't swing a dead cat without hitting an espresso stand.
17:31 nikk yeah true
17:31 nikk i have a friend from there, probably the biggest coffee snob i know :]
17:31 JoeJulian lol... speaking of habits... I just looked at your domain name. :D
17:31 kkeithley dead cat? What happens if you swing a live cat?
17:31 nikk haha
17:31 dbruhn You hit a red bull cooler
17:32 nikk you don't live long
17:32 JoeJulian kkeithley: Bunch of liberals around here... you'd probably end up in jail...
17:36 acalvo JoeJulian, all geo-replication data is stored in the info file of a volume
17:36 acalvo however, gsyncd processes don't die with glusterd init script
17:36 acalvo should be manually stopped
17:36 acalvo (in case it helps in the future)
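(Aside: a sketch of the manual cleanup acalvo describes, assuming nothing else on the box matches the pattern gsyncd.)
    pgrep -fl gsyncd     # list leftover geo-replication worker processes
    pkill -f gsyncd      # stop them by hand, since stopping glusterd does not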
17:39 JoeJulian acalvo: Thanks, I'll add that to my "where the hell did I read that" memory bank... ;)
17:39 JoeJulian Luckily the channel's logged and searchable.
17:41 REdOG is there a standard procedure to create a replica from existing data? I've been testing and have been creating the replica volume then populating the data, which seems to take quite a while.
17:42 JoeJulian There's no "supported" way, but in practice creating a volume where the left-hand brick has the existing data works.
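(Aside: a sketch of that in-practice approach with hypothetical names; server1:/data/existing already holds the files and server2:/data/empty is a fresh brick.)
    gluster volume create myvol replica 2 server1:/data/existing server2:/data/empty
    gluster volume start myvol
    mount -t glusterfs server1:/myvol /mnt/myvol
    gluster volume heal myvol full    # push the pre-existing files over to the empty brick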
17:43 REdOG k tks, ill give it a go in my next round
17:44 kkeithley More liberals than Massachusetts? Improbable.
17:49 * semiosis started yesterday with a redeye (drip w/ espresso shot) and chased that with a cold brew concentrate
17:51 rossi_ joined #gluster
17:52 dbruhn I'm 24 OZ of Sugar Free Redbull in this morning and ready for a cold press. I actually don't drink caffeine on the weekends.
17:55 primechuck I don't want to live in a world where 24OZ isn't a serving caffeinated beverage.
17:57 quique joined #gluster
17:57 semiosis @ppa
17:57 glusterbot semiosis: The official glusterfs packages for Ubuntu are available here: 3.3 stable: http://goo.gl/7ZTNY -- 3.4 stable: http://goo.gl/u33hy -- 3.5 QA: http://goo.gl/Odj95k
17:57 semiosis @forget ppa
17:57 glusterbot semiosis: The operation succeeded.
17:58 semiosis @learn ppa as The official glusterfs packages for Ubuntu are available here: 3.4 stable: http://goo.gl/u33hy -- 3.5 QA: http://goo.gl/Odj95k -- introducing QEMU with GlusterFS 3.4 support: http://goo.gl/7I8WN4
17:58 glusterbot semiosis: The operation succeeded.
17:59 quique can someone tell me how I would change the ssl cert file locations from this: http://lists.gnu.org/archive/html/gluster-devel/2013-05/msg00139.html
17:59 glusterbot Title: Re: [Gluster-devel] Glusterfs SSL capability (at lists.gnu.org)
18:00 neofob joined #gluster
18:02 calum_ joined #gluster
18:02 rossi_ joined #gluster
18:30 khushildep joined #gluster
18:37 andreask joined #gluster
18:40 Derek joined #gluster
18:44 ^^rcaskey semiosis, is there a bug # for your request re: compiling in gluster support in qemu for 13.04 LTS? I saw you did create a PPA for that getup.
18:46 semiosis ^^rcaskey: there's no redhat BZ bug but there are a couple bugs related to getting qemu-with-gluster in ubuntu trusty over at launchpad
18:46 ^^rcaskey Yeah, I meant for launchpad
18:47 ^^rcaskey is there a main inclusion request filed already?
18:47 semiosis https://bugs.launchpad.net/cloud-archive/+bug/1246924
18:47 glusterbot Title: Bug #1246924 “qemu not built with GlusterFS support” : Bugs : ubuntu-cloud-archive (at bugs.launchpad.net)
18:47 semiosis https://bugs.launchpad.net/ubuntu/+source/glusterfs/+bug/1274247
18:47 glusterbot Title: Bug #1274247 “[MIR] Glusterfs” : Bugs : “glusterfs” package : Ubuntu (at bugs.launchpad.net)
18:48 semiosis https://blueprints.launchpad.net/ubuntu/+spec/servercloud-p-glusterfs-mir
18:48 glusterbot Title: GlusterFS MIR : Blueprints : Ubuntu (at blueprints.launchpad.net)
18:50 ^^rcaskey so it looks like it's in jdstrande's hands at this point then
18:51 semiosis you know as much as I do now
18:52 ^^rcaskey thank you much
18:53 semiosis yw
18:53 harish joined #gluster
18:53 rwheeler joined #gluster
18:53 semiosis i made the ppa so people can start testing this out now, and also in case this MIR doesnt happen
18:53 semiosis notice the MIR has been open for years
18:53 semiosis and while i'm happy to see this renewed interest in it, i'm aware that there's no guarantee
18:54 ^^rcaskey yeah I figured that was probably the case
18:56 failshell joined #gluster
18:59 haomaiwa_ joined #gluster
19:12 jikz joined #gluster
19:13 RedShift joined #gluster
19:15 kkeithley quique: after creating the volume, edit its volfile in /var/lib/glusterd/vols/$volname. In the protocol/server xlator defn, add the options transport.socket.ssl-own-cert, transport.socket.ssl-private-key, and transport.socket.ssl-ca-list; specifying the locations of your cert, key, and ca files.
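(Aside: a sketch of how those options might sit in the protocol/server block of the volfile; the volume name and file paths are hypothetical, and the elided lines stand for whatever glusterd already generated.)
    volume testvol1-server
        type protocol/server
        ...
        option transport.socket.ssl-own-cert    /etc/ssl/glusterfs.pem
        option transport.socket.ssl-private-key /etc/ssl/glusterfs.key
        option transport.socket.ssl-ca-list     /etc/ssl/glusterfs.ca
        ...
    end-volume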
19:22 neurodrone__ joined #gluster
19:26 zaitcev joined #gluster
19:27 pixelgremlins_ba joined #gluster
19:29 quique kkeithley: my volume is named testvol1 i edited: /var/lib/glusterd/vols/testvol1/testvol1.gluster1.int.domain.com.mnt-gluster1-testvol1.vol (http://fpaste.org/77965/), stopped the volume, restarted glusterd and then started the volume and got: http://fpaste.org/77966/
19:29 glusterbot Title: #77965 Fedora Project Pastebin (at fpaste.org)
19:33 kkeithley what are the permissions on the files and the directory? If it's like ssh, as I think it is, it should be finicky about those.
19:34 neurodrone__ joined #gluster
19:34 quique kkeithley: all I did was move the three ssl files from /etc/ssl (where it was working) to /opt/working_ssl, and add those three options in the vol file (i'm assuming that was the right vol file).
19:34 quique kkeithley: perms: http://fpaste.org/77969/
19:34 glusterbot Title: #77969 Fedora Project Pastebin (at fpaste.org)
19:36 quique kkeithley: there's a file called testvol1-fuse.vol, but there's no protocol/server xlator def so it has to be testvol1.gluster1.int.domain.com.mnt-gluster1-testvol1.vol
19:36 _pol joined #gluster
19:38 glusterbot New news from newglusterbugs: [Bug 1066128] glusterfsd crashes with SEGV during catalyst run <https://bugzilla.redhat.com/show_bug.cgi?id=1066128>
20:05 cjanbanan joined #gluster
20:29 Derek joined #gluster
20:45 daMaestro joined #gluster
20:47 Derek_ joined #gluster
20:49 diegows joined #gluster
20:55 cp0k joined #gluster
20:56 tdasilva left #gluster
20:56 _pol joined #gluster
20:57 zerick joined #gluster
21:02 jobewan joined #gluster
21:06 cp0k Hey guys, after recently upgrading my production env to Gluster 3.4.2, I no longer get output from commands like 'gluster volume status' and 'gluster volume rebalance volname status'
21:06 cp0k each time the exit code is 146
21:14 andreask joined #gluster
21:24 rfortier1 joined #gluster
21:24 cp0k any idea why this may be happening?
21:27 dbruhn cp0k, anything in the logs?
21:27 dbruhn and did you restart all of the gluster services after the upgrade?
21:34 ktosiek joined #gluster
21:35 jporterfield joined #gluster
21:42 _pol joined #gluster
21:45 plarsen joined #gluster
21:48 jporterfield joined #gluster
21:51 cp0k dbruhn: yes I fired up the new instances of glusterd after upgrading to 3.4.2 on the storage nodes
21:53 dbruhn cp0k, from my understanding the glusterfsd services don't restart with the glusterd service, that might be part of the issue
21:53 cp0k dbruhn: I have 4 storage nodes, with a replica count of 2
21:53 primechu_ joined #gluster
21:54 cp0k dbruhn: do you recommend I restart glusterfs on the storage nodes?
21:57 wrale i'm setting up a gluster cluster.. (spoke in here about it last week)... six hosts, which will service an hpc-like cluster..... three networks.. public (1GbE), storage (on shared 10GbE) and compute (on shared 10GbE)... I'd like to serve files (via NFS and native clients) on the public and compute VLANs... i was told last week that replication and such happens from the client side.. (cont.)
21:57 rwheeler joined #gluster
21:58 wrale 1: does this mean having a separate network for storage replication (the storage VLAN, above) is unneeded (even for QoS)?  ... 2: will i have any issues with these boxes being multi-homed, storage network nonsense aside?  how should dns handle this, for example?
22:00 DV joined #gluster
22:00 wrale 3: can i set everything up with just the public network at this time, and transition to utilizing the other VLANs later (storage and/or compute)
22:00 wrale *?
22:00 sputnik13 joined #gluster
22:02 JoeJulian wrale: Use ,,(hostnames) and split-horizon dns. Each hostname should resolve to an address that's on the network for which it's resolving.
22:02 glusterbot wrale: Hostnames can be used instead of IPs for server (peer) addresses. To update an existing peer's address from IP to hostname, just probe it by name from any other peer. When creating a new pool, probe all other servers by name from the first, then probe the first by name from just one of the others.
22:02 semiosis 1. maybe.  2. maybe. 3. yes
22:02 semiosis and i was about to add what JoeJulian just said.  hostnames++!
22:03 psyl0n joined #gluster
22:04 wrale cool.. thanks.. so something like node-n1.something.com (pub) vs. node-n1.compute (priv) would work
22:06 JoeJulian ip netns exec vlan1 host server1; ip netns exec vlan2 host server1; output: 10.0.0.1, 10.1.0.1 (assuming server1 is on 10.0.0.1 for vlan1 and 10.1.0.1 for vlan2)
22:06 semiosis well, when you first set up gluster, you'll want to use hostnames for the peers & bricks
22:07 semiosis ...and not change that later.  just change the IPs behind those hostnames, possibly depending on where the request comes from (split-horizon)
22:08 JoeJulian (can you tell I've been doing a lot of openvswitch lately? :D )
22:08 wrale :) i think i get it... will need to do some research though..
22:08 wrale (which is expected)
22:11 JoeJulian jclift_: really? .so.6.0.0? I've got .so.0.0.0
22:11 JoeJulian jclift_: but I'm not building from git.
22:12 wrale cautiously reading this: http://andreas-lehr.com/blog/archives/612-glusterfs-in-multi-home-environments.html
22:12 jclift_ JoeJulian: Yeah, that's cut-n-pasted from the ls -la and rpm -ql output
22:13 jclift_ JoeJulian: On CentOS 6.5 anyway
22:13 JoeJulian jclift_: btw... the .so in -devel is just a symlink to 0.0.0 in the 3.4.2 rpms
22:13 jclift_ Found a bug on F19 that doesn't let make glusterrpms compile on F19/EL7 atm.
22:13 jclift_ Prob fix that tomorrow.  Found the offending commit, but haven't had time to investigate
22:14 jclift_ JoeJulian: Yeah.  Figuring it's not a big deal.  At the same time, it's an easy fix if it's needed as I'm already adjusting the files in those rpms
22:16 harish joined #gluster
22:23 wrale i think this confused me more.. lol.. : https://bugzilla.redhat.com/show_bug.cgi?id=831699
22:23 glusterbot Bug 831699: low, unspecified, ---, jdarcy, NEW , Handle multiple networks better
22:30 rpowell1 joined #gluster
22:31 aquagreen joined #gluster
22:32 rpowell2 joined #gluster
22:33 wrale okay.. so i'm just writing this out to see if i understand... supposing for a moment that i skip DNS server configuration, i should just create an introspective cluster by rendering a multi-homed /etc/hosts file.. this would list FQDNs for each interface with their respective IPs... i could skip giving any one IP a short name..when it comes time to probe for peers, i should probe using the FQDN associated with the VLAN on which i'd like to place t
22:33 wrale he internal-to-cluster traffic... for instance, 'gluster peer probe node-n1.storage.local'.. once the nodes know about one another over the storage net, i could proceed to create volumes from hosts on the other vlans?  :) maybe
22:33 failshel_ joined #gluster
22:33 wrale (that /etc/hosts would be the same for all hosts, an index of the entire cluster)
22:34 JoeJulian your peer names and server names as part of the brick definition ,,(glossary) need to match.
22:34 glusterbot A "server" hosts "bricks" (ie. server1:/foo) which belong to a "volume"  which is accessed from a "client"  . The "master" geosynchronizes a "volume" to a "slave" (ie. remote1:/data/foo).
22:35 JoeJulian So if you define the peer as "node-n1.storage.local" it will need to be called that and resolve to the correct IP address regardless of which network the client is on.
22:35 JoeJulian That would mean that you'll need different hosts files for each network.
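(Aside: a sketch of what that split-horizon setup looks like with plain hosts files, reusing wrale's node-n1.something.com as the one canonical peer name; the addresses are made up.)
    # /etc/hosts on machines in the public VLAN
    203.0.113.11   node-n1.something.com
    # /etc/hosts on machines in the storage VLAN
    192.168.0.11   node-n1.something.com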
22:36 wrale *thinking*.. thanks for helping me out here
22:37 JoeJulian No worries. I remember the mental gymnastics I went through figuring out clustering four years ago.
22:39 cp0k I found an old .glusterfs.broken/ dir in the root of my gluster mount. This is old metadata from an old Gluster setup which I no longer need. Is there any harm in removing this directory directly on the storage nodes rather than removing it properly via the FUSE mount point?
22:40 JoeJulian cp0k: Read ,,(split brain) and remove the gfid symlink associated with that directory at the same time and you should be okay.
22:40 glusterbot cp0k: I do not know about 'split brain', but I do know about these similar topics: 'split-brain', 'splitbrain'
22:40 JoeJulian @split-brain
22:40 glusterbot JoeJulian: To heal split-brain in 3.3+, see http://joejulian.name/blog/fixing-split-brain-with-glusterfs-33/ .
22:41 JoeJulian @alias "split-brain" "split brain"
22:41 glusterbot JoeJulian: The operation succeeded.
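(Aside: the brick-side cleanup JoeJulian refers to, roughly; the brick path is hypothetical and <gfid> is whatever the xattr on the directory reports.)
    getfattr -n trusted.gfid -e hex /bricks/brick1/.glusterfs.broken   # read the directory's gfid
    rm -rf /bricks/brick1/.glusterfs.broken
    rm /bricks/brick1/.glusterfs/<first 2 hex chars>/<next 2 hex chars>/<gfid>   # remove the matching entry under .glusterfs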
22:44 cp0k thanks
22:44 cp0k I'll just remove the dir from the mount point, as Gluster prefers
22:50 cjanbanan joined #gluster
22:52 cp0k while removing the .glusterfs.broken from the mount point, would there be any harm in adding new storage nodes and running a rebalance in parallel?
22:58 semiosis if there is harm, it's a bug
22:58 semiosis supposed to be able to do that stuff online
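(Aside: a sketch of the online expansion itself, with hypothetical server and brick names for a replica 2 volume.)
    gluster volume add-brick VOLNAME server5:/export/brick1 server6:/export/brick1
    gluster volume rebalance VOLNAME start
    gluster volume rebalance VOLNAME status   # safe to watch while other work continues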
22:58 cp0k semiosis: thanks
22:58 gdubreui joined #gluster
23:02 _pol joined #gluster
23:04 _pol_ joined #gluster
23:14 wrale hmm.. okay, so after reading and thinking, let's see if i'm making more sense of this.... the server side of gluster is truly only concerned with one identity for each peer(+brick)... that could be short, FQDN or IP address ... (plus path to brick as applicable).... clients need to use the same exact identity, whatever it is, whatever the ingress (to server) network maybe.... this means that any time a FDQN is chosen as that identity (even if reso
23:14 wrale lved from short name by /etc/resolv.conf search entry), this FDQN must be what the client uses to connect, again, regardless of the source network, VLAN, etc... hmm... example identity: node-n1.something.com..... so we need to use split-horizon dns to inform hosts that are natively (and perhaps only) in the private (V)LAN that hostname node-n1.something.com is something like 192.168.0.32, instead of what public clients (resolving node-n1.something
23:14 wrale .com) would resolve as some IP in public IP space (e.g. 65.55.58.201)... (sorry for the long message)
23:15 wrale *may be
23:15 badone joined #gluster
23:17 wrale so a bind server listening on say 192.168.0.2, with a zone for something.com pointing toward addresses in 192.168.0.0/24 is what is needed (in addition to the more "public" dns).. eh..
23:18 wrale (authoritatively)
23:18 wrale (no idea how the reverse records could play into this)
23:22 wrale in short: all clients use the same FQDN (or short name) for their brick addresses.. this FQDN can then resolve (via split dns) to whatever IP (and thus network) the client is hoping to traverse... (?)
23:23 divbell .oO( can i have some money now? )
23:23 wrale sure
23:23 wrale we all can have money :)
23:24 JoeJulian wrale: Correct.
23:24 JoeJulian referring to your understanding, not the random money query.
23:24 wrale :) i was hoping this was the case
23:25 wrale JoeJulian: thank you.. now, i need to think about how to bring this about with FreeIPA
23:25 wrale (perhaps)
23:26 JoeJulian The best way is with free IPA...
23:26 wrale lol.. nice
23:26 JoeJulian ... or free stout would be my preference...
23:26 wrale red hat rebranded freeipa as just ipa .. lol
23:27 saltsa joined #gluster
23:27 badone joined #gluster
23:28 wrale i think i'll stop here for the day.. snow and ice outside.. woohoo!.. thanks again
23:28 JoeJulian You're welcome.
23:30 primechuck joined #gluster
23:37 jporterfield joined #gluster
23:44 failshell joined #gluster
23:57 fidevo joined #gluster
