
IRC log for #gluster, 2015-02-20


All times shown according to UTC.

Time Nick Message
00:03 Pupeno_ joined #gluster
00:05 h4rry joined #gluster
00:10 T3 joined #gluster
00:24 PorcoAranha joined #gluster
00:35 JonathanD joined #gluster
00:46 T3 joined #gluster
00:51 bala joined #gluster
01:03 kshlm joined #gluster
01:12 Rapture joined #gluster
01:22 johnbot Just ran into an issue where I was able to mount my gluster file system via the command line but not by using an entry in fstab 'mount: unknown filesystem type 'glusterfs' when running mount -a. Installing glusterfs-fuse fixed the problem but I'm wondering why the mount on the command line succeeded.
01:25 JoeJulian My guess would be that you didn't specify -t glusterfs from the command line and you instead mounted from nfs.
01:26 johnbot JoeJulian: makes sense thanks
01:26 johnbot JoeJulian: I forgot NFS was running
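
    A rough sketch of the two mount paths being compared above, assuming a hypothetical server "server1" and volume "myvol"; the native client needs the glusterfs-fuse package, otherwise "mount -a" fails with "unknown filesystem type 'glusterfs'":

        # /etc/fstab entry for the native (FUSE) client; requires glusterfs-fuse
        server1:/myvol  /mnt/gluster  glusterfs  defaults,_netdev  0 0

        # equivalent one-off mount from the command line
        mount -t glusterfs server1:/myvol /mnt/gluster

        # without -t glusterfs, mount treats host:/path as NFS,
        # which is what JoeJulian suspects happened here
        mount server1:/myvol /mnt/gluster
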
01:26 bennyturns joined #gluster
01:37 duyt1001 joined #gluster
01:39 duyt1001 left #gluster
01:59 h4rry joined #gluster
02:01 bharata-rao joined #gluster
02:07 sprachgenerator joined #gluster
02:29 gildub joined #gluster
02:32 elico joined #gluster
02:33 necrogami joined #gluster
02:37 rjoseph joined #gluster
02:48 ilbot3 joined #gluster
02:48 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
03:05 h4rry joined #gluster
03:44 Pupeno joined #gluster
03:49 rjoseph joined #gluster
03:49 kanagaraj joined #gluster
03:59 shubhendu joined #gluster
03:59 purpleidea joined #gluster
04:05 MacWinner joined #gluster
04:05 gem joined #gluster
04:07 PaulCuzner joined #gluster
04:14 itisravi joined #gluster
04:19 RameshN joined #gluster
04:19 atinmu joined #gluster
04:30 aravindavk joined #gluster
04:36 deepakcs joined #gluster
04:45 nbalacha joined #gluster
04:49 T3 joined #gluster
04:50 rafi joined #gluster
04:51 anoopcs joined #gluster
04:52 anil joined #gluster
04:54 schandra joined #gluster
04:55 badone_ joined #gluster
04:56 kdhananjay joined #gluster
05:01 spandit joined #gluster
05:02 plarsen joined #gluster
05:03 ndarshan joined #gluster
05:05 ppai joined #gluster
05:08 karnan joined #gluster
05:09 atalur joined #gluster
05:10 raghu joined #gluster
05:12 soumya joined #gluster
05:18 lalatenduM joined #gluster
05:22 meghanam joined #gluster
05:23 jiffin joined #gluster
05:24 telmich joined #gluster
05:27 kdhananjay joined #gluster
05:28 prasanth_ joined #gluster
05:36 dusmant joined #gluster
05:48 overclk joined #gluster
05:48 kumar joined #gluster
06:01 bala joined #gluster
06:07 alan^ joined #gluster
06:13 aravindavk joined #gluster
06:17 SOLDIERz_ joined #gluster
06:38 T3 joined #gluster
06:42 karnan joined #gluster
06:44 nshaikh joined #gluster
06:45 smohan joined #gluster
06:54 mbukatov joined #gluster
06:57 glusterbot News from newglusterbugs: [Bug 1194546] Write behind returns success for a write irrespective of a conflicting lock held by another application <https://bugzilla.redhat.com/show_bug.cgi?id=1194546>
06:58 anrao joined #gluster
07:10 aravindavk joined #gluster
07:13 jtux joined #gluster
07:21 aravindavk joined #gluster
07:41 [Enrico] joined #gluster
07:45 alefauch joined #gluster
07:46 Philambdo joined #gluster
08:02 nshaikh joined #gluster
08:03 SOLDIERz_ joined #gluster
08:14 ndarshan joined #gluster
08:25 ricky-ticky joined #gluster
08:26 T3 joined #gluster
08:28 glusterbot News from newglusterbugs: [Bug 1194559] Make compatible version of python bindings in libgfapi-python compared to libgfapi C apis. <https://bugzilla.redhat.com/show_bug.cgi?id=1194559>
08:30 elico joined #gluster
08:37 fsimonce joined #gluster
08:38 spandit joined #gluster
08:44 ndarshan joined #gluster
08:47 nbalacha joined #gluster
08:47 vee_ joined #gluster
08:49 vee_ Hi was wondering if someone could help.. trying to heal a brick (in a distribute-replicate set up), over the weekend I had to heal 11 million files but now 'gluster volume heal <myvolume> info' shows 500 random gfid files such as '<gfid:19163923-9c54-4944-a191-ad7aeceac676>' and they're not healing - has anyone else had this issue?
08:50 meghanam joined #gluster
08:50 LebedevRI joined #gluster
08:50 rjoseph joined #gluster
08:51 atalur joined #gluster
09:04 ndarshan|lunch joined #gluster
09:04 Slashman joined #gluster
09:10 aravindavk joined #gluster
09:15 ppai joined #gluster
09:17 alefauch left #gluster
09:23 ndarshan joined #gluster
09:30 vee_ Hi was wondering if someone could help.. trying to heal a brick (in a distribute-replicate set up), over the weekend I had to heal 11 million files but now 'gluster volume heal <myvolume> info' shows 500 random gfid files such as '<gfid:19163923-9c54-4944-a191-ad7aeceac676>' and they're not healing - has anyone else had this issue?
09:37 kovshenin joined #gluster
09:40 ppai joined #gluster
10:03 papamoose1 joined #gluster
10:06 misc joined #gluster
10:07 T3 joined #gluster
10:10 SOLDIERz__ joined #gluster
10:10 shubhendu joined #gluster
10:12 deniszh joined #gluster
10:18 spandit joined #gluster
10:18 SOLDIERz___ joined #gluster
10:18 nbalacha joined #gluster
10:23 rjoseph joined #gluster
10:30 pkoro joined #gluster
10:32 Norky joined #gluster
10:37 lalatenduM joined #gluster
10:38 _shaps_ joined #gluster
10:39 itpings hi guys
10:39 itpings gluster howto with urbackup ready
10:39 itpings will be posting it soon
10:41 samppah cool, waiting for it :(
10:41 samppah oops
10:41 samppah :)
10:41 itpings lol
10:41 itpings but first i would like the gluster team to proof read it
10:42 samppah i'm sure that community can help with that
10:42 itpings yup sure
10:44 ndevos itpings: send it to gluster-devel@gluster.org and request a review there - once done, you should post it to the users list :)
10:44 itpings okiz
10:44 itpings just created my account
10:44 itpings oh
10:44 itpings i thought a wiki place is good to start
10:47 elico joined #gluster
10:50 badone__ joined #gluster
10:53 ndevos itpings: sure, you can write it in the wiki, and pass the URL to your page along
10:58 itpings no now i am sending mail
11:04 raz joined #gluster
11:05 o5k joined #gluster
11:05 itpings done
11:05 itpings i hope all goes well and this howto will be published
11:05 itpings but lets see
11:10 o5k joined #gluster
11:16 [Enrico] joined #gluster
11:20 shubhendu joined #gluster
11:22 RameshN joined #gluster
11:23 T0aD joined #gluster
11:25 ppai joined #gluster
11:28 hagarth joined #gluster
11:33 diegows joined #gluster
11:46 kkeithley1 joined #gluster
11:46 kkeithley1 left #gluster
11:46 kkeithley1 joined #gluster
11:49 bene2 joined #gluster
11:49 uebera|| joined #gluster
12:00 o5k_ joined #gluster
12:03 o5k__ joined #gluster
12:07 monotek joined #gluster
12:10 [Enrico] joined #gluster
12:24 ira joined #gluster
12:25 kanagaraj joined #gluster
12:30 edwardm61 joined #gluster
12:34 o5k_ joined #gluster
12:43 hagarth joined #gluster
12:44 atalur joined #gluster
12:46 social joined #gluster
12:51 lalatenduM joined #gluster
12:59 glusterbot News from newglusterbugs: [Bug 1194640] Tracker bug for Logging framework expansion. <https://bugzilla.redhat.com/show_bug.cgi?id=1194640>
13:00 jiffin joined #gluster
13:11 plarsen joined #gluster
13:13 jmarley joined #gluster
13:14 nbalacha joined #gluster
13:17 rjoseph joined #gluster
13:22 karnan joined #gluster
13:24 bernux joined #gluster
13:25 samppah itpings: where did you send it? :)
13:29 bernux left #gluster
13:29 bernux joined #gluster
13:35 RameshN joined #gluster
13:38 rjoseph joined #gluster
13:46 calisto joined #gluster
13:49 aurigus joined #gluster
13:52 bala joined #gluster
14:01 deepakcs joined #gluster
14:07 Gill joined #gluster
14:08 virusuy joined #gluster
14:08 virusuy joined #gluster
14:13 abyss^ joined #gluster
14:18 jvandewege joined #gluster
14:24 morse joined #gluster
14:25 abyss^ joined #gluster
14:26 elico joined #gluster
14:29 glusterbot News from newglusterbugs: [Bug 1181669] File replicas differ in content even as heal info lists 0 entries in replica 2 setup <https://bugzilla.redhat.com/show_bug.cgi?id=1181669>
14:32 georgeh-LT2 joined #gluster
14:33 diegows joined #gluster
14:35 kshlm joined #gluster
14:38 cmorandin joined #gluster
14:38 rwheeler joined #gluster
14:48 dusmant joined #gluster
14:49 bernux joined #gluster
14:52 shubhendu joined #gluster
14:56 ildefonso joined #gluster
15:02 bennyturns joined #gluster
15:04 wushudoin joined #gluster
15:07 lalatenduM joined #gluster
15:22 jobewan joined #gluster
15:33 deepakcs_ joined #gluster
15:33 coredump joined #gluster
15:36 nshaikh joined #gluster
15:45 rjoseph joined #gluster
15:45 theron joined #gluster
15:51 cmorandin joined #gluster
15:53 deepakcs joined #gluster
16:00 coredump joined #gluster
16:04 bernux Hello, I'm currently testing glusterfs and having really slow performance on small files (4k average size); for example, untarring an archive containing 80000 files takes 58 minutes on the gluster share and 4s on the same partition outside the gluster share
16:05 bernux I've tried lots of voodoo tuning over the past week but none of it has improved this
16:06 bernux dd from a client dd bs=4k count=200k if=/dev/zero of=/glusterdata conv=fdatasync
16:07 bernux give me 150MB/s
16:08 bernux syncing a dir with small files (4k) from this client to the gluster share gives me 0.5MB/s to 1MB/s
16:09 bernux the objective was to build a redundant tile file server (geomatics), but sadly with this performance it's impossible to use gluster in prod
16:10 bernux does anyone have any hints? or will my needs never fit with a glusterfs solution?
16:13 Norky GlusterFS (currently) performs poorly with small files, or more precisely with many files, i.e. directories containing 1000s of files
16:14 Norky your dd can probably be improved by using a larger (1MB + ) block size, btw, but that doesn't really address your main concern
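
    For comparison, a variant of bernux's dd test with a larger block size as Norky suggests; the output path and count are illustrative assumptions (same ~800MB of data as bs=4k count=200k):

        # same amount of data, but written in 1 MiB blocks instead of 4 KiB blocks
        dd bs=1M count=800 if=/dev/zero of=/glusterdata/ddtest conv=fdatasync
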
16:15 bernux the dd is limited by the performance of the second gluster server
16:16 bernux so I was good with 150MB/s
16:17 bernux but 0.5 is unworkable, we have 100k or millions of 4k files to update monthly
16:19 bernux Thanks Norky, this is what I was afraid of
16:20 Norky you might find it much better if you can subdivide your tiles into dirs of, say, 100 each
16:21 chirino joined #gluster
16:21 prasanth_ joined #gluster
16:21 Norky e.g. one dir for each minute of lat. and lon. (not that I have any idea of your tile size/scale)
16:22 Norky I'm not an expert btw, there might be others who can give you a better answer
16:22 Norky but certainly we've had trouble with enumerating directories containing 1000s of files
16:23 Norky it's basically because there's a round trip for each piece of metadata, which adds huge latency
16:26 theron joined #gluster
16:26 bernux I don't think we can change the data structure, which is normalized, but I'm gonna check
16:28 awerner joined #gluster
16:32 T3 joined #gluster
16:33 ron-slc joined #gluster
16:33 ildefonso joined #gluster
16:33 bernux Thank you for your hint Norky, I think it's time to find another solution for our particular needs
16:34 jbrooks joined #gluster
16:37 B21956 joined #gluster
16:38 T3 joined #gluster
16:40 _shaps_ joined #gluster
16:45 wkf joined #gluster
16:49 Norky okay, sorry it didn't work out, best of luck
16:49 Norky though feel free to hang around in channel in case one of the more knowledgeable folks has anything to add - I hate to worry that I've left you with bum/out of data information :)
16:51 Norky out of date*
16:53 inodb joined #gluster
16:53 Gill joined #gluster
16:54 o5k__ joined #gluster
16:58 theron joined #gluster
17:00 ron-slc joined #gluster
17:01 bernux no don't worry Norky, this is similar to the information I could find on the web
17:03 bernux I'm just trying my luck here in case someone has a miracle tuning, or knows of a major improvement in very-small-file performance coming in a future release
17:28 nshaikh joined #gluster
17:29 ildefonso joined #gluster
17:43 kaushal_ joined #gluster
17:50 JoeJulian bernux, Norky: The problem with that structure is only if you need to list the directory in order to load the tile. If you already know the name of your file and can just open it, you should be fine.
17:51 Gill joined #gluster
18:04 jbrooks joined #gluster
18:06 cmorandin joined #gluster
18:08 edong23 joined #gluster
18:15 Gill_ joined #gluster
18:19 bernux JoeJulian: that, and when you have to write thousands of files across several subdirs
18:19 bernux thousands of very small files
18:21 JoeJulian What's a "very small file"?
18:21 JoeJulian nm, I see
18:21 bernux 4k average size
18:22 jackdpeterson joined #gluster
18:23 JoeJulian Is extracting tar files the use case, or is that simply the way to populate the data?
18:26 bernux extracting a tar with thousands or millions of small files happens every month for a regular update
18:27 JoeJulian I ask because a lot of times people become fixated on a problem they have that isn't actually part of the problem they're trying to solve. If your goal is to serve those files to tens/hundreds/thousands of clients then the question should be, does it do that?
18:27 JoeJulian Your extract could be done to a different directory and then rotated into place when it's done.
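
    A minimal sketch of the "extract elsewhere, then rotate" idea; the gluster mount at /mnt/tiles and the archive name are assumptions:

        # unpack into a staging directory on the same volume
        mkdir /mnt/tiles/.staging
        tar -xf update.tar -C /mnt/tiles/.staging
        # once the slow extract has finished, swap the new tree into place
        mv /mnt/tiles/current /mnt/tiles/previous
        mv /mnt/tiles/.staging /mnt/tiles/current
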
18:31 bernux rotate is a good idea but not applicable, because only the new/updated tiles are pre-generated, so a particular dir in the archive may hold only 10 tiles while 100 other tiles are already in production
18:33 bernux some dirs have 1000 subdirs with 3 or 4 more subdirs in each, etc etc
18:33 bernux and I think the final dirs where the tiles live may have 5 to 100 tiles
18:34 JoeJulian Ok. What's your business concern with the amount of time it takes?
18:34 jbrooks joined #gluster
18:35 bernux yes a bit
18:35 bernux but the example I give 4s vs 58min was for a very small archive
18:35 bernux 1.1G
18:36 jmarley joined #gluster
18:37 bernux usually the monthly archive are from 3G to 10G
18:37 JoeJulian And if the process took 24 hours, what would the harm be? I'm not trying to downplay the issue, just understand the perspective.
18:38 Rapture joined #gluster
18:40 JoeJulian ... I'm thinking of ideas to mitigate it, but in order not to waste your time I feel like if I can understand your perspective I won't go town the wrong path.
18:40 JoeJulian or down the wrong path either...
18:41 bernux for the update maybe not, but if I compare with the "before", the untar of a real archive is approx. 30 to 40 min, so if
18:41 bernux 4s = 58min with gluster
18:41 JoeJulian comparing apples to orchards
18:42 JoeJulian Ok, so the problem you're trying to solve with clustered storage is what?
18:42 bernux no, the archive I untar is the same but smaller than those we usually untar
18:42 JoeJulian I do understand that.
18:43 bernux what I'm trying to solve is being able to update our tiles in an acceptable time
18:43 JoeJulian But an apple is something you can eat. One person can grab an apple and eat it very efficiently. When you have thousands of people that want apples, you could hand them out from one resource point, or you could let them into the orchard.
18:44 bernux I give you an other example
18:44 JoeJulian Each person in the orchard would take a little bit more time to get their apple, but the entire job could be handled much more quickly.
18:44 bernux before trying with untar,
18:44 bernux I tried to sync the dir from our dev to the new gluster server
18:45 bernux I started it last week
18:45 bernux friday
18:45 JoeJulian So, if updating your tiles is the only problem you have, using a clustered filesystem is a bad idea. You should use a raid0 array of ssds.
18:46 bernux yesterday it was nowhere near finished
18:46 bernux and that was for only one set of tiles
18:46 bernux usually I send it in the afternoon and by the next morning it's all done for all our sets
18:47 bernux I want a clustered filesystem because the server with the tiles will share them with several other servers
18:47 JoeJulian bingo
18:47 bernux and I don't want it to be my single point of failure
18:47 JoeJulian Does it do that job effectively?
18:48 bernux from what I tested, yes it does, but I didn't push that testing further once I saw that the update process would be too slow
18:49 JoeJulian got it.
18:49 ira joined #gluster
18:50 JoeJulian What tool do you use for syncing?
18:50 bernux rsync
18:51 bernux actually the process is :
18:51 JoeJulian When you rsync, do you limit the set of tiles you're comparing, or do you go through the entire tree?
18:52 bernux a server builds a compressed archive with the delta, then it is sent to all the servers in all the environments
18:53 bernux then those servers decompress the archive
18:54 bernux for my test I tried the entire tree first
18:54 JoeJulian And when you said, "I try to sync the dir from our dev to new gluster server" was that your production process, or was that an rsync?
18:54 bernux first I tried an rsync like I do for every new server
18:55 bernux because there's approx 500G of tiles
18:56 bernux then, when after one week I saw it was still not finished
18:56 bernux I tried making an archive of a small set of tiles
18:56 bernux send it to the gluster server
18:56 bernux then uncompressed it in the gluster share partition
18:57 JoeJulian So that's about 130k tiles in full?
18:58 bernux 81728 exactly
18:58 bernux when I uncompress I can see in iotop that the disk writes never go above 1MB/s
18:59 anrao joined #gluster
18:59 bernux if I uncompress in the same partition but not in the gluster share I got 300MB/s approx
18:59 JoeJulian There's some math not working and your performance stats do not seem to match what I've seen.
19:00 JoeJulian 500G / 81728 = 6M <> 4K
19:01 bernux there are a lot of symbolic links
19:01 bernux 2045 dirs
19:02 bernux oh no
19:02 bernux 500G is not for 81728 tiles
19:03 bernux 81728 tiles = number of tiles in my 1.1G archive
19:05 bernux for a projection there are approx. 50 million tiles
19:06 bernux and more to come because the max zoom levels are not pregenerated
19:06 bernux it grows as clients browse in
19:09 samppah ss-win 30
19:09 samppah oops
19:18 PaulCuzner joined #gluster
19:20 jbrooks joined #gluster
19:23 JoeJulian bernux: This weekend, if everything goes as expected, I'll see if I can create a sample set of 50M 10k files and write them to a replicated gluster volume.
19:25 bernux ok thank you JoeJulian, I appreciate that
19:28 bernux for my part, like I said earlier, I'm trying to think of an alternative solution because some production work is waiting on this new "architecture"
19:28 bernux alternative solution or alternative way of doing things
19:29 JoeJulian Sure.
19:29 JoeJulian The only idea off the top of my head would be to have several clients update different parts of the tree so you're spreading the wait times.
19:35 bernux like splitting the archive ?
19:35 JoeJulian Yes.
19:35 bernux complicated to apply because :
19:36 bernux level 0 = 1 tile
19:36 bernux level 1 = 4 tiles
19:36 bernux level 2 =16
19:36 bernux ....
19:36 bernux level 12= 64384 tiles
19:37 bernux level 13 = 227393 tiles
19:37 bernux etc etc
19:37 bernux until level 16
19:37 bernux so when updates apply they are not equally split across the directories
19:38 bernux but maybe make one archive for updates from levels 10 to 15
19:38 bernux and one for 16
19:40 bernux uncompressing it locally on the gluster server, in parallel, would change things you think?
19:44 JoeJulian In theory, yes, though it may require multiple mounts to see any improvement.
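
    A sketch of the parallel approach JoeJulian describes, using two FUSE mounts of the same volume; the server, volume, and archive names are assumptions:

        # two independent mounts of the same volume
        mount -t glusterfs server1:/tiles /mnt/tiles-a
        mount -t glusterfs server1:/tiles /mnt/tiles-b
        # extract half of the update through each mount, in parallel
        tar -xf update-part1.tar -C /mnt/tiles-a &
        tar -xf update-part2.tar -C /mnt/tiles-b &
        wait
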
19:46 jbrooks joined #gluster
19:47 JoeJulian Most of the performance issues should be from lookup() which is going to try to find the file if it exists, then compare replicas to ensure they're not stale.
19:47 JoeJulian @lucky dht misses are expensive
19:47 glusterbot JoeJulian: http://joejulian.name/blog/dht-misses-are-expensive/
19:47 JoeJulian see ^ for some potential performance impact of a file not existing or existing on the wrong dht subvolume.
19:49 bernux oh that's one of the website I already read ;-)
19:49 JoeJulian :)
19:50 JoeJulian btw... with rsync and gluster volumes, always use --inplace or you'll create tempfiles that are on the wrong dht subvolumes after they're renamed.
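
    A minimal example of the --inplace advice, with source and destination paths assumed:

        # update files in place rather than via temp files that end up
        # on the wrong dht subvolume once they are renamed
        rsync -a --inplace /data/tiles/ /mnt/tiles/
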
19:51 bernux ok I made two almost equal archives from my 1.1G
19:52 bernux I'm going to try untarring them simultaneously on the same mount
19:53 bernux and after that on 2 separate mounts
19:53 JoeJulian cool
20:34 Ginja joined #gluster
20:35 Ginja Hi Everyone, anyone know why the status of this commit was set to abandoned? http://review.gluster.org/#/c/9475/
20:35 Ginja AFAICT, this bug is still an issue in 3.6.2
20:36 JoeJulian bug 1184587
20:36 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1184587 high, unspecified, ---, pkarampu, POST , rename operation failed on disperse volume with glusterfs 3.6.1
20:37 JoeJulian Oh, it says right in there, "Superseded by http://review.gluster.org/9420"
20:37 Ginja Ah nevermind, I see that it was actually included in http://review.gluster.org/#/c/9420/
20:37 Ginja Yup, haha, sorry.
20:38 kminooie joined #gluster
20:39 JoeJulian I added that note to the bug report.
20:39 Ginja Thanks :)
20:39 kminooie Hi everyone I was wondering if there is a way to run gluster-cli on a computer that is not part of any cluster ( I couldn't find anything in docs )
20:42 rwheeler joined #gluster
20:47 bernux the untars on the same mount have finished
20:48 bernux 27min for one, 52min for the second
20:48 inodb joined #gluster
20:53 bernux 2 untars now started on 2 different mounts
20:57 JoeJulian kminooie: Yes, unfortunately you can do that. bug 990284
20:57 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=990284 is not accessible.
20:58 JoeJulian seriously Red Hat? You said you won't fix it so why is it "secure"....
21:00 inodb joined #gluster
21:07 jbrooks joined #gluster
21:11 JoeJulian kminooie: http://oi61.tinypic.com/2gwgd8x.jpg
21:13 inodb joined #gluster
21:15 anrao joined #gluster
21:21 johnbot I asked this before but I lost the response in my IRC client. I want to move a few bricks over to my second gluster server (distributed pair) in order to better balance IO load. When I follow the currently existing instructions and try running a replace-brick I get an error that the command is deprecated so my question is... If I'd like to remove all data from let's say 4 bricks on gluster server1 and migrate
21:21 johnbot them to new (completely empty) bricks added to gluster server2 what is the best command to run?
21:26 ndevos JoeJulian: I have no idea what to think of that bug... But I thought the option was now restricted to certain operations?
21:27 ndevos I use it in my automount setup to list volumes, and I would not like to lose that feature :)
21:31 kminooie thanks JoeJulian I don't know what to think of that bug either, but that is not what i was looking for. I was hoping to be able to get stuff like peer status or volume info, etc.. on a monitoring node that itself is not part of any cluster. the 'gluster' command doesn't have a -h option ( as in host, not help ) so I was wondering if there was a way to basically point it at a cluster?
21:34 ndevos kminooie: its not called "-h", but --remote-host=$hostname
21:35 ndevos it is how vdsm-gluster (from oVirt) does most of the status checks, I think
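
    A sketch of what that looks like from a non-peer monitoring node, assuming a reachable peer named "server1"; these are the kinds of read-only queries kminooie is after:

        gluster --remote-host=server1 volume info
        gluster --remote-host=server1 peer status
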
21:35 JoeJulian ndevos: Needs to be secured with a key exchange, imho.
21:36 ndevos JoeJulian: sure, some restrictions would be good
21:36 JoeJulian I may have just ranted a little bit on my blog...
21:36 ndevos lol
21:52 kminooie thanks ndevos: I am using rpm packages from gluster.org ( I am on 3.6.2  on fedora 20 ) and using --remote-host  I see this error message in cli.log :
21:52 kminooie E [rpc-transport.c:266:rpc_transport_load] 0-rpc-transport: /usr/lib64/glusterfs/3.6.2/rpc-transport/socket.so: cannot open shared object file: No such file or directory
21:54 kminooie I also see this one ( a few line before the error message ) D [rpc-transport.c:188:rpc_transport_load] 0-rpc-transport: missing 'option transport-type'. defaulting to "socket"
21:54 kminooie I obviously have no Idea what I am talking about but shouldn't this be tcp?
21:59 ndevos kminooie: I think you need to install these packages: glusterfs-cli, glusterfs-libs, glusterfs
21:59 ndevos "socket" is tcp, it would say "unix" for domain sockets
22:00 kminooie I have the first two , glusterfs was required by glusterfs-cli , but I am gonna give it a shot, thanks
22:00 wushudoin joined #gluster
22:00 kminooie that was ...  glusterfs was NOT required ....
22:03 ndevos glusterfs is the package that contains the socket.so and other xlators that do the TCP/RPC communication, maybe it should move to the -libs package?
22:10 johnbot JoeJulian: would you mind providing your thoughts on my question above?
22:11 kminooie yup after installing glusterfs it works. thanks everyone
22:14 JoeJulian johnbot: type "/topic" to see where we keep the channel logs
22:15 JoeJulian But my actual thoughts... I'm pissed at the devs for that whole deprecation. I don't think there is any valid way to reduce the size of a gluster volume.
22:15 johnbot Joejulian: thanks, haven't used IRC in 8 or so years so....
22:15 johnbot JoeJulian: and that's what confused the hell out of me.... ;)
22:16 JoeJulian If the command didn't warn you, it would still require a working rebalance command. Apparently that's not something that's important to ever fix.
22:16 * JoeJulian seems a little grumpy today.
22:17 johnbot JoeJulian: in addition, the documentation has more holes than swiss cheese and a golf course combined.
22:17 JoeJulian That's an open source project problem.
22:18 JoeJulian Find an open source project with more detailed documentation and I'll show you a project whose documentation is out of date.
22:18 johnbot True...
22:18 JoeJulian But! You ARE part of the community. Any deficiencies you find are most easily corrected by you. :D
22:19 johnbot JoeJulian: I knew that was coming! ;) I'll try my best.
22:20 johnbot JoeJulian: Since I do all my gluster stuff in AWS I'd be happy to assist with filling in missing bits and general best practices.
22:21 JoeJulian There's been a similar request by Rackspace to provide best practices documentation for their product.
22:21 johnbot saw that float past the mailing list.
22:21 johnbot For instance, http://www.gluster.org/community/documentation/index.php/Getting_started_setup_aws
22:22 johnbot I don't think this is quite 100% accurate now that VPC's are in play "Using EBS volumes and Elastic IP’s is also recommended in production. "
22:22 johnbot For instance....
22:22 johnbot Private IP
22:23 johnbot IP's in VPC's never change, so unless you need those public IPs.......
22:23 JoeJulian Or if your vm gets trashed and you need to recreate it.
22:23 johnbot JoeJulian: What's the best way to contribute documentation, howtos etc?
22:24 johnbot Trolling around website now...
22:24 JoeJulian The official docs are part of the source code repo and go through the development process ,,(hack)
22:24 glusterbot The Development Work Flow is at http://www.gluster.org/community/documentation/index.php/Development_Work_Flow
22:24 johnbot JoeJulian: excellent thanks!
22:24 JoeJulian The wiki I've been fighting tooth and nail to keep around even though everyone keeps wanting to do something "cooler".
22:25 JoeJulian It works, people can edit it, it's easy.
22:26 cmorandin joined #gluster
22:26 johnbot yes but is it 'in the cloud', it needs to be 'in the cloud'
22:27 JoeJulian I agree. Everything must be in the cloud. Keeps me employed.
22:27 johnbot JoeJulian: me too.
22:29 johnbot Totally off topic but related to an issue i had where one of my ubuntu glusterfs servers in AWS kept corrupting partitions. (I know it wasn't gluster) but, is there any practical reason to even create partitions to host the bricks? Not sure what it's buying me in aws land.
22:30 JoeJulian nope
22:32 johnbot Well then..... next time.......
22:34 JoeJulian What part of the world are you in, johnbot?
22:34 johnbot JoeJulian: California-Pasadena-Caltech
22:35 JoeJulian Mmm, not up here near Seattle then.
22:36 johnbot JoeJulian: nope, but Seattle is great.
22:39 Rapture joined #gluster
22:41 T3 joined #gluster
22:55 kminooie so I don't know if there is something wrong with my setup or this is the intended behavior, but when I run 'gluster peer status' it doesn't show the node that I am on ( and now that I am doing it remotely, doesn't show the peer to which I am connected.) this happens on all the nodes in my cluster. am I missing something ?
22:57 JoeJulian Not missing anything. Peer status shows the peers that host is connected to. It doesn't show itself.
23:11 kminooie I have this mini cluster with 2 nodes that are replicas of each other. today one of the volumes just stopped working ( can not be mounted from other computers ). the volume is started but heal fails, and when I run gluster volume status detail this is what I get: http://ur1.ca/jrolv  as far as I can tell the fs on brick-2 is fine. I don't know what I need to do now?
23:14 JoeJulian kminooie: That's odd. Do you have any other bricks on brick-2?
23:14 dgandhi joined #gluster
23:15 kminooie yes but on different harddrive
23:15 dgandhi joined #gluster
23:16 kminooie actually I am getting the same thing for the other volume as well ( on brick-2 )
23:17 kminooie http://ur1.ca/jron9
23:20 JoeJulian Check dmesg
23:27 kminooie nothing unusual in dmesg
23:27 JoeJulian weird.
23:27 kminooie I am restarting the server ...
23:28 JoeJulian That was going to be my suggestion.
23:34 T3 joined #gluster
23:36 Sjors joined #gluster
23:43 kminooie same thing after restart --> File System          : N/A
23:44 kminooie I don't know if this makes a difference but 2 days ago I upgraded from 3.2 to 3.6
23:45 kminooie it was working fine after the upgrade once I unmounted and remounted all the clients
23:45 JoeJulian Could be a huge difference if glusterfsd didn't get killed and restarted.
23:46 kminooie no I made sure I stopped all the daemons before I upgraded
23:47 kminooie this issue started today
23:47 JoeJulian should brick-2:/srv/gluster/adsimages be smaller than brick-1:/srv/gluster/adsimages?
23:48 kminooie the underlying file system is smaller yes ( smaller partition on that computer )
23:48 JoeJulian fpaste /proc/mounts
23:48 JoeJulian on brick-2
23:51 kminooie http://ur1.ca/jrotr
23:53 JoeJulian If it's on /opt, what's /srv/gluster/adsimages ? A symlink?
23:53 kminooie yup
23:53 JoeJulian I don't think that should work.
23:53 JoeJulian Try a bind mount.
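
    A sketch of the bind mount JoeJulian suggests, using the brick path from the log; the /opt/adsimages source path is inferred from the conversation and may not be exact:

        # replace the symlink with a real mount point
        rm /srv/gluster/adsimages
        mkdir /srv/gluster/adsimages
        mount --bind /opt/adsimages /srv/gluster/adsimages
        # to make it persistent, an /etc/fstab line like:
        # /opt/adsimages  /srv/gluster/adsimages  none  bind  0 0
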
23:53 kminooie was working for the past 2 years  :)
23:53 JoeJulian You upgraded and gluster got smarter (sort of)
23:54 kminooie ok lets see ...
23:54 JoeJulian It now tries to check to make sure your mount hasn't died.
23:56 JoeJulian Another option if that doesn't work is "gluster volume replace-brick  brick-2:/srv/gluster/adsimages brick-2:/opt/adsimages commit force"
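
    For reference, replace-brick also takes the volume name before the brick paths; the volume name below is a placeholder:

        gluster volume replace-brick <VOLNAME> brick-2:/srv/gluster/adsimages brick-2:/opt/adsimages commit force
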
