IRC log for #gluster, 2015-08-05


All times shown according to UTC.

Time Nick Message
00:00 PaulCuzner left #gluster
00:00 haomaiwa_ joined #gluster
00:07 Pupeno joined #gluster
00:29 auzty joined #gluster
00:34 Pupeno joined #gluster
00:51 _dist joined #gluster
01:10 B21956 joined #gluster
01:27 Lee1092 joined #gluster
01:33 lyang0 joined #gluster
01:36 cyberswat joined #gluster
01:47 ilbot3 joined #gluster
01:47 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:48 gildub joined #gluster
01:49 nzero joined #gluster
01:51 julim joined #gluster
01:57 pppp joined #gluster
01:58 nangthang joined #gluster
01:59 cyberswat joined #gluster
02:11 harish joined #gluster
02:12 harish_ joined #gluster
02:13 cyberswat joined #gluster
02:23 jcastillo joined #gluster
02:29 gem joined #gluster
02:46 aaronott joined #gluster
02:53 Pupeno_ joined #gluster
03:03 nzero joined #gluster
03:05 kdhananjay joined #gluster
03:14 prg3 joined #gluster
03:14 scubacuda joined #gluster
03:32 atinm joined #gluster
03:35 calisto joined #gluster
03:39 kotreshhr joined #gluster
03:41 bharata-rao joined #gluster
03:46 shubhendu joined #gluster
03:47 plarsen joined #gluster
03:52 sakshi joined #gluster
03:56 TheSeven joined #gluster
04:00 itisravi joined #gluster
04:02 kotreshhr left #gluster
04:08 RameshN joined #gluster
04:10 meghanam joined #gluster
04:13 deepakcs joined #gluster
04:17 overclk joined #gluster
04:19 gem joined #gluster
04:23 yazhini joined #gluster
04:25 rafi joined #gluster
04:31 jwd joined #gluster
04:34 jwaibel joined #gluster
04:35 ppai joined #gluster
04:36 jiffin joined #gluster
04:36 kotreshhr joined #gluster
04:44 ndarshan joined #gluster
04:45 overclk joined #gluster
04:47 ramteid joined #gluster
04:52 elico joined #gluster
04:53 kanagaraj joined #gluster
04:56 nbalacha joined #gluster
05:00 aravindavk joined #gluster
05:00 hagarth joined #gluster
05:01 vimal joined #gluster
05:02 prabu joined #gluster
05:03 TvL2386 joined #gluster
05:04 vmallika joined #gluster
05:06 hgowtham joined #gluster
05:08 haomaiwa_ joined #gluster
05:12 kshlm joined #gluster
05:13 Manikandan joined #gluster
05:17 dusmant joined #gluster
05:22 sripathi1 joined #gluster
05:27 Bhaskarakiran joined #gluster
05:27 corretico joined #gluster
05:27 pppp joined #gluster
05:38 ashiq joined #gluster
05:38 prabu joined #gluster
05:40 Lee- joined #gluster
05:49 atinm joined #gluster
05:52 haomaiwa_ joined #gluster
06:01 ramky joined #gluster
06:01 jwd joined #gluster
06:05 aaronott joined #gluster
06:16 kdhananjay joined #gluster
06:21 vimal joined #gluster
06:28 nangthang joined #gluster
06:33 ramky joined #gluster
06:35 raghu joined #gluster
06:36 pcaruana joined #gluster
06:39 gem joined #gluster
06:44 atinm joined #gluster
06:48 Saravana_ joined #gluster
07:03 itisravi joined #gluster
07:07 [Enrico] joined #gluster
07:09 karnan joined #gluster
07:13 raghu joined #gluster
07:14 atinm joined #gluster
07:15 ghenry joined #gluster
07:16 Slashman joined #gluster
07:16 gem joined #gluster
07:16 kotreshhr joined #gluster
07:18 arao joined #gluster
07:18 overclk joined #gluster
07:19 Manikandan joined #gluster
07:21 PaulCuzner joined #gluster
07:22 PaulCuzner left #gluster
07:24 PaulCuzner joined #gluster
07:25 PaulCuzner left #gluster
07:28 ppai joined #gluster
07:30 Backer_ joined #gluster
07:31 Backer_ Hi Pranith
07:32 skoduri joined #gluster
07:32 overclk joined #gluster
07:39 fsimonce joined #gluster
07:40 ctria joined #gluster
07:49 gem joined #gluster
08:01 arcolife joined #gluster
08:06 kovshenin joined #gluster
08:06 ajames-41678 joined #gluster
08:32 shaunm joined #gluster
08:32 anil joined #gluster
08:41 aravindavk joined #gluster
08:45 maveric_amitc_ joined #gluster
08:46 Saravana_ joined #gluster
08:49 dusmant joined #gluster
08:50 Lee_ joined #gluster
08:54 elico joined #gluster
08:56 ashiq joined #gluster
08:59 aravindavk joined #gluster
09:03 ramky joined #gluster
09:14 LebedevRI joined #gluster
09:20 Saravana_ joined #gluster
09:23 nsoffer joined #gluster
09:25 pranithk joined #gluster
09:25 pranithk Backer_: hi
09:25 pranithk Backer_: are you the same backer who raised https://bugzilla.redhat.com/show_bug.cgi?id=1236050
09:25 glusterbot Bug 1236050: high, high, ---, pkarampu, ASSIGNED , Disperse volume: fuse mount hung after self healing
09:26 Manikandan joined #gluster
09:32 Saravana_ joined #gluster
09:33 aravindavk joined #gluster
09:35 txomon|fon joined #gluster
09:35 txomon|fon hi, I am trying to make gluster replication work, but it just won't happen
09:35 txomon|fon I am having all kind of issues here :(
09:40 atinm txomon|fon, aravindavk can help you on this
09:41 txomon|fon The steps I did were: I created a distributed volume
09:41 txomon|fon then I added another brick
09:42 txomon|fon then I deleted that brick, which left something wrong in gluster because although I launched a gluster volume remove-brick start, it didn't move any file
09:43 atinm did you do a remove-brick commit post start?
09:43 txomon|fon anyway, I added a new brick with gluster volume add-brick bdml replica 2 machine2:m/o/untpoint
09:45 txomon|fon atinm, yes
09:45 txomon|fon I am pasting all the session
09:50 txomon|fon atinm, https://gist.github.com/anonymous/72efe06db048ae3d1a8f
09:50 glusterbot Title: Gluster operations · GitHub (at gist.github.com)
09:50 txomon|fon or was it aravindavk ?
09:51 kotreshhr joined #gluster
09:52 atinm txomon|fon, what's the issue here?
09:52 atinm txomon|fon, can you point that to me?
09:54 txomon|fon atinm, the labsvmh02 machine is not replicating
09:54 nsoffer joined #gluster
09:54 txomon|fon moreover, if I do stat on any file that was replicated in the old labsvmh02 machine, I will get an error
09:55 txomon|fon oh
09:55 txomon|fon it suddenly replicated
09:55 txomon|fon but the heal was failing :/
09:56 atinm itisravi, ^^
09:57 ctria joined #gluster
09:57 txomon|fon I tried to launch the replication but everything was failing
09:57 txomon|fon s/launch/trigger/
09:57 glusterbot What txomon|fon meant to say was: I tried to trigger the replication but everything was failing
09:57 txomon|fon now it's working, maybe it was syncing metadata?
09:58 txomon|fon but why wasn't there any message about the replication being in progress?
10:01 ws2k3 joined #gluster
10:02 ndarshan joined #gluster
10:02 pranithk txomon|fon: could you do the following?
10:02 pranithk txomon|fon: Could you bring the new brick down
10:02 txomon|fon pranithk, but it's working now...
10:03 pranithk txomon|fon: then do mkdir <non-existent-dir> and do rmdir <non-existent-dir>
10:03 aravindavk joined #gluster
10:03 rwheeler joined #gluster
10:03 txomon|fon pranithk, indeed I did that, mkdir on the new brick
10:03 txomon|fon and it magically created the file inside
10:03 pranithk txomon|fon: gluster volume start <volname> force
10:03 pranithk txomon|fon: I have been meaning to complete the documentation, but couldn't. I will complete it this week
10:04 pranithk txomon|fon: it is a very bad idea to just do add-brick like you did...
10:04 txomon|fon so gluster volume start ... force would trigger a sync?
10:04 pranithk txomon|fon: It should not be done while mount is doing operations...
10:04 txomon|fon pranithk, but labsvmh02 was deleted and then readded
10:05 * aravindavk is now looking at txomon|fon msgs
10:05 pranithk txomon|fon: when we create and delete a directory on the mount point's root, it will mark that the brick needs healing
10:05 pranithk aravindavk: it is not georep
10:05 aravindavk pranithk: ok
10:05 pranithk aravindavk: it is replication. I am handling :-)
10:05 aravindavk pranithk: cool. :)
10:06 txomon|fon pranithk, but I also wrote heal full and it didn't do anything :(
10:06 pranithk txomon|fon: which version of gluster?
10:06 txomon|fon 3.7.2
10:07 pranithk txomon|fon: yeah, this is a known issue. That is the reason for the documentation....
10:07 pranithk txomon|fon: the steps above will make sure everything is fine. I suggest you do them...
10:09 txomon|fon pranithk, I had already done them, just testing things, I took down the volume
10:09 txomon|fon then created a new folder just in case it could trigger the sync
10:09 txomon|fon and then started it
10:09 txomon|fon and many other steps
10:09 txomon|fon so some happened to work
10:10 txomon|fon anyway, I would really appreciate if I could see the state of replication
10:10 pranithk txomon|fon: one question just to confirm....
10:10 fabiodive joined #gluster
10:10 pranithk txomon|fon: this new directory you created. Is it done on the mount point while the new brick was down?
10:11 fabiodive Hi there! Question: How is Gluster compared to Ceph? Thank you
10:11 txomon|fon the volume was down
10:11 txomon|fon the brick wasn't I suppose
10:12 pranithk txomon|fon: oh, volume shouldn't be down. You should just brick the new brick down...
10:12 pranithk s/just brick/just bring/
10:12 glusterbot What pranithk meant to say was: txomon|fon: oh, volume shouldn't be down. You should just bring the new brick down...
10:12 txomon|fon I meant I took it down, just in case =)
10:13 pranithk txomon|fon: but bringing the volume down won't make this work... only the new brick should be down when we create this directory. That will make sure everything will work
10:13 pranithk txomon|fon: new directory should be created on the mount.
10:18 txomon|fon pranithk, just in that situation? I promise it's working now... maybe one of the heal full is working?
10:19 txomon|fon maybe mkdir-ing new folders directly in the brick did the trick?
10:19 pranithk txomon|fon: okay :-). If it works, its fine. And if "gluster volume heal <volname> info" shows all zeros for number of entries then you are good to go
10:19 txomon|fon pranithk, it did when it wasn't OK :(
10:19 pranithk txomon|fon: it won't do the trick :-(
10:20 kotreshhr joined #gluster
10:21 pranithk txomon|fon: well that is what we need to fix. Basically when the volume already has replication, it keeps track of bricks that are good/bad. But because you moved from distribute to replicate, we need the steps above to make sure replication learns it. In future releases we will add this intelligence to the add-brick command itself
10:22 txomon|fon perfect
10:22 txomon|fon I found it was impossible to do the conversion from distributed to replicated, but it seems my idea worked out...
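
A consolidated sketch of the recovery steps pranithk walks through above, for reference. The volume name bdml comes from txomon|fon's session; the mount point /mnt/bdml and the scratch directory name are placeholders, not taken from the log:

    # 1. Bring only the new brick down; the volume itself stays up.
    # 2. Through a client mount, create and then remove a directory at the
    #    volume root, so replication marks the new brick as needing heal:
    mkdir /mnt/bdml/heal-marker
    rmdir /mnt/bdml/heal-marker
    # 3. Restart the downed brick:
    gluster volume start bdml force
    # 4. Healthy once every entry count here reads zero:
    gluster volume heal bdml info
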
10:22 txomon|fon btw, which is the real web page for docs?
10:23 txomon|fon because google links to several different versions...
10:23 pranithk txomon|fon: yeah, it is better to look at the logs in the github pages. My patch is still under review.... I will get it done as soon as possible
10:23 pranithk s/logs/docs/
10:23 glusterbot What pranithk meant to say was: txomon|fon: yeah, it is better to look at the docs in the github pages. My patch is still under review.... I will get it done as soon as possible
10:24 pranithk txomon|fon: https://github.com/gluster/glusterfs/tree/master/doc
10:24 glusterbot Title: glusterfs/doc at master · gluster/glusterfs · GitHub (at github.com)
10:25 txomon|fon ohhhhh
10:25 jcastill1 joined #gluster
10:25 txomon|fon I have been here http://www.gluster.org/community/documentation/index.php/Gluster_3.2:_Setting_Volume_Options all the time!!!
10:25 pranithk txomon|fon: that is EOL (End of Life)
10:26 pranithk txomon|fon: I will speak with some people to get the documentation properly done per version.... thanks for your feedback
10:26 ndarshan joined #gluster
10:26 pranithk atinm: Are you doing the volunteering today on #gluster?
10:28 txomon|fon pranithk, it's because of google, when you search something, you usually end up there
10:28 atinm pranithk, not really as my slot is on Monday and Thursday
10:29 pranithk atinm: cool.. I will try to bring up the documentation per release thing in the meeting today. Just telling you as well in case I forget
10:29 pranithk atinm: oh! there is an agenda etherpad right?
10:29 pranithk atinm: Let me add this to the agenda.
10:29 atinm pranithk, yes :)
10:30 jcastillo joined #gluster
10:30 atinm pranithk, https://public.pad.fsfe.org/p/gluster-bug-triage
10:30 glusterbot Title: FSFE Etherpad: public instance (at public.pad.fsfe.org)
10:30 pranithk atinm: thanks atin
10:30 atinm pranithk, sorry
10:31 atinm pranithk, https://public.pad.fsfe.org/p/gluster-community-meetings
10:31 glusterbot Title: FSFE Etherpad: public instance (at public.pad.fsfe.org)
10:31 pranithk atinm: thanks again :-)
10:31 atinm pranithk, yw :)
10:32 pranithk atinm: added it
10:32 pranithk atinm: got a meeting to run to
10:33 dusmant joined #gluster
10:33 kshlm joined #gluster
10:33 kshlm joined #gluster
10:33 aravindavk joined #gluster
10:35 jcastillo joined #gluster
10:35 akay1 joined #gluster
10:38 gildub joined #gluster
10:38 kshlm joined #gluster
10:55 beardyjay joined #gluster
10:59 Guest53280 left #gluster
11:04 [Enrico] joined #gluster
11:05 firemanxbr joined #gluster
11:18 Bhaskarakiran joined #gluster
11:21 pppp joined #gluster
11:22 dusmant joined #gluster
11:27 spalai joined #gluster
11:32 dgandhi joined #gluster
11:33 DV joined #gluster
11:34 ira joined #gluster
11:37 Manikandan joined #gluster
11:37 jcastill1 joined #gluster
11:42 jcastillo joined #gluster
11:44 jvandewege joined #gluster
11:46 n-st left #gluster
11:46 Trefex joined #gluster
11:47 kshlm joined #gluster
11:49 julim joined #gluster
11:49 shubhendu joined #gluster
11:50 ndarshan joined #gluster
11:50 gildub joined #gluster
11:51 cyberswat joined #gluster
11:54 overclk joined #gluster
11:54 rafi joined #gluster
11:55 kdhananjay1 joined #gluster
11:55 rafi joined #gluster
11:57 shyam joined #gluster
11:57 poornimag joined #gluster
11:58 kdhananjay joined #gluster
11:59 jrm16020 joined #gluster
11:59 rafi REMINDER: Gluster Community meeting starting in another 1 minutes in #gluster-meeting
12:01 rjoseph joined #gluster
12:02 kshlm joined #gluster
12:04 kanagaraj joined #gluster
12:06 nbalacha joined #gluster
12:07 unclemarc joined #gluster
12:08 surabhi joined #gluster
12:10 chirino joined #gluster
12:11 harish_ joined #gluster
12:22 DV_ joined #gluster
12:25 Manikandan joined #gluster
12:25 itisravi_ joined #gluster
12:28 spalai left #gluster
12:28 spalai joined #gluster
12:33 ndarshan joined #gluster
12:34 surabhi joined #gluster
12:35 shubhendu joined #gluster
12:36 jcastill1 joined #gluster
12:41 jcastillo joined #gluster
12:42 Bhaskarakiran joined #gluster
12:43 Romeor WazaAaa
12:47 afics joined #gluster
12:55 spalai left #gluster
12:58 kovshenin joined #gluster
13:06 nsoffer joined #gluster
13:10 kotreshhr left #gluster
13:13 theron joined #gluster
13:13 aravindavk joined #gluster
13:16 DV joined #gluster
13:17 atinm joined #gluster
13:28 ndevos kdhananjay++ rafi++ thanks!
13:28 glusterbot ndevos: kdhananjay's karma is now 3
13:28 glusterbot ndevos: rafi's karma is now 1
13:32 rafi joined #gluster
13:35 cuqa_ joined #gluster
13:36 calavera joined #gluster
13:39 klaxa|work joined #gluster
13:40 corretico joined #gluster
13:41 nsoffer joined #gluster
13:42 arcolife joined #gluster
13:44 calavera joined #gluster
13:46 B21956 joined #gluster
13:47 hgowtham joined #gluster
13:47 kdhananjay joined #gluster
13:50 vmallika joined #gluster
13:51 pppp joined #gluster
13:53 dgandhi joined #gluster
13:53 yosafbridge joined #gluster
13:53 _Bryan_ joined #gluster
13:55 dijuremo joined #gluster
14:04 cyberswat joined #gluster
14:05 magamo left #gluster
14:08 overclk joined #gluster
14:13 bennyturns joined #gluster
14:18 cuqa_ joined #gluster
14:22 nbalacha joined #gluster
14:26 ctria joined #gluster
14:27 plarsen joined #gluster
14:32 lkoranda joined #gluster
14:37 nangthang joined #gluster
14:37 Trefex joined #gluster
14:56 DV_ joined #gluster
14:58 _dist joined #gluster
15:00 jiffin joined #gluster
15:01 kanagaraj joined #gluster
15:02 jiffin joined #gluster
15:04 ccha joined #gluster
15:04 ccha joined #gluster
15:05 cyberswat joined #gluster
15:06 ctria joined #gluster
15:08 sankarshan_ joined #gluster
15:08 k-ma joined #gluster
15:10 DV joined #gluster
15:11 nangthang joined #gluster
15:14 jobewan joined #gluster
15:15 stefanos joined #gluster
15:16 kshlm joined #gluster
15:17 ctria joined #gluster
15:20 theron joined #gluster
15:25 kdhananjay joined #gluster
15:26 wushudoin joined #gluster
15:40 nzero joined #gluster
15:40 ccha joined #gluster
15:40 ccha joined #gluster
15:41 calavera joined #gluster
15:42 moss joined #gluster
15:43 k-ma joined #gluster
15:44 moss Hi there.  I installed GlusterFS on 2 Ubuntu 14.04 LTS nodes and set up replication.  I "messed up" a few times before, and completely removed GlusterFS, rebooted, and reinstalled it.  For some reason, I am getting this error when I attempt to mount the volume that's being replicated: ERROR: /var/www is in use as a brick of a gluster volume
15:45 nzero https://joejulian.name/blog/glusterfs-path-or-a-prefix-of-it-is-already-part-of-a-volume/
15:45 glusterbot Title: GlusterFS: {path} or a prefix of it is already part of a volume (at joejulian.name)
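
The linked post covers this class of error: removing or deleting a volume leaves GlusterFS extended attributes (and a .glusterfs directory) on the old brick path, and reusing the path trips over them. A minimal sketch of the cleanup that post describes, assuming the standard xattr names and using moss's /var/www path; run it on the server that held the stale brick:

    setfattr -x trusted.glusterfs.volume-id /var/www
    setfattr -x trusted.gfid /var/www
    rm -rf /var/www/.glusterfs
    # then restart glusterd so it picks up the cleaned state
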
15:46 moss thank you :D
15:46 moss Now I get "Mount failed. Please check the log file for more details."
15:47 DV joined #gluster
15:48 moss ah now it's more clear :D thanks so much!
15:49 nzero no problem
15:50 moss nzero: can you take a look at this for me? http://dpaste.com/04J4EKC
15:50 glusterbot Title: dpaste: 04J4EKC (at dpaste.com)
15:51 vimal joined #gluster
15:51 laxdog_ joined #gluster
15:51 nzero what does your mount command look like
15:51 moss nzero: sudo mount -t glusterfs 192.168.198.149:www-volume /var/www
15:52 nzero is www-volume the right name of your volume
15:53 Manikandan joined #gluster
15:53 moss derp.
15:53 moss nzero man. you know one of those days where your brain just doesn't work? today is that day.  thank you very much my good sir :)
15:53 nzero sure
15:54 nzero i had a day like that last week
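
The failed mount traced back to a volume name typo. One way to catch that before mounting, sketched with the server address from moss's session; <volname> stands for whatever name the server actually reports:

    gluster volume list                               # or: gluster volume info
    sudo mount -t glusterfs 192.168.198.149:<volname> /var/www
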
15:54 squizzi_ joined #gluster
15:54 laxdog_ Hey all. I'm having an issue with write speeds over fuse-mounted gluster. I'm using RDMA, but seeing write speeds of only ~10MB/s. Is there somewhere I can start to debug this? Writing directly to disks gives expected speeds (>120MB/s) and the link speeds on IB tests are ok (>13Gb/s)
15:58 timotheus1 joined #gluster
15:59 nzero i haven't used rdma but am considering it in the future, so am curious about your experience with it. did you follow the docs here?: http://www.gluster.org/community/documentation/index.php/RDMA_Transport
16:01 laxdog_ Yes. I followed those and a few other things I found via RHS docs.
16:01 laxdog_ It's all fully functional. Just very slow.
16:02 nzero i can't really help. i'd just be throwing darts at you. the channel is very slow this morning
16:03 nzero maybe check again in a bit when a dev is on
16:08 laxdog_ nzero: ok thanks
16:10 kshlm joined #gluster
16:18 pppp joined #gluster
16:18 shyam left #gluster
16:19 hagarth joined #gluster
16:20 cholcombe joined #gluster
16:29 poornimag joined #gluster
16:35 overclk joined #gluster
16:42 moss nzero: Do you know why there isn't an init script for Ubuntu 14.04 for glusterfs?
16:42 pranithk joined #gluster
16:45 vincent_vdk joined #gluster
16:47 calavera joined #gluster
16:54 aravindavk joined #gluster
16:54 vimal joined #gluster
16:59 theron_ joined #gluster
17:01 shaunm joined #gluster
17:02 _maserati joined #gluster
17:05 Slashman joined #gluster
17:05 _maserati Romeor, come back from vacation! its been like a month!
17:13 atinm joined #gluster
17:23 jwd joined #gluster
17:25 wushudoin| joined #gluster
17:26 _dist joined #gluster
17:30 wushudoin| joined #gluster
17:34 nsoffer joined #gluster
17:36 shyam joined #gluster
17:43 mz joined #gluster
17:45 mz hi everyone
17:47 kotreshhr joined #gluster
17:50 gem joined #gluster
17:52 _maserati hi sir
17:53 mz I'm new to glusterfs, just trying it out and I'm looking to some answers (which I wasn't able to locate with google :) ) how this is the right place to ask
17:53 mz I'm new to glusterfs, just trying it out and I'm looking to some answers (which I wasn't able to locate with google :) ) hope this is the right place to ask
17:54 _maserati deja vu
17:54 mz :)
17:55 _maserati Yes, this place is perfect for asking your questions. The devs are very helpful and willing. Just post your questions (don't ask to ask questions ;P) and in time hopefully someone answers
17:55 mz ok, my issue is this: I'm using a 3-node distributed volume (no striping, currently no replication) and was wondering if there is any way from the client perspective (fuse) to somehow get info about where a specific file on the volume landed (which brick)
17:57 mz my scenario is large (>1GB) files, write once, read many times, but after writing it once I need to perform a check on the file, which is pretty CPU expensive... I'd like to do it locally on the brick actually hosting the file to avoid transferring it back to the client
17:58 mz but I'm not sure if this is possible from fuse client... maybe libgfapi ?
18:01 l0uis @which brick
18:01 glusterbot l0uis: To determine on which brick(s) a file resides, run getfattr -n trusted.glusterfs.pathinfo $file through the client mount as root.
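
A worked example of that query; the file path, hostname, and brick path below are illustrative, not from the log. The xattr reports the DHT subvolume and the backing brick for the file:

    # run as root through the FUSE client mount, not on a brick:
    getfattr -n trusted.glusterfs.pathinfo /mnt/myvol/big-file.img
    # trusted.glusterfs.pathinfo="(<DISTRIBUTE:myvol-dht>
    #     <POSIX(/data/brick1):node1:/data/brick1/big-file.img>)"
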
18:01 _maserati That is a tiny bit over my current knowledge level of gluster. But what I can say is: be careful how you access those files. Writing directly to a brick is equivalent to dd'ing a chunk of data into the middle of your hard drive, filesystem be damned.
18:02 moss Does anyone know why there is no init script for glusterfs on ubuntu 14.04 ?
18:03 _maserati @which brick
18:03 glusterbot _maserati: To determine on which brick(s) a file resides, run getfattr -n trusted.glusterfs.pathinfo $file through the client mount as root.
18:03 _maserati cool! (i was just testing if gb would respond to me)
18:04 mz no, I just want to perform read on the file directly from brick, no writes
18:04 mz these go properly through client
18:05 _maserati I would verify that wouldn't cause any issues; reading does, after all, modify timestamps and such
18:05 _maserati though the client is installed on your servers by default, so you could just set up local mount points
18:05 mz just tried that, and got the result I expected, so thanks a lot for that hint
18:06 mz ok, another good point, this way no unnecessary network traffic
18:06 mz that was very helpful!
18:06 _maserati yay i helped someone!  That's like... 2 now   :)
18:06 mz keep up the good work :)
18:07 _maserati im trying! I got a gluster environment dumped on me a couple weeks ago so im learning as fast as i can
18:07 _maserati fwiw, i love gluster
18:07 mz btw I gotta say so far I'm really impressed with gluster (it's only been a week or so) and it's easy, intuitive, and as far as I can say now, really performs
18:08 mz I'll try to push it a little harder when we upgrade our eth network and disk arrays but so far looking really good
18:09 theron joined #gluster
18:09 _maserati agreed. the only thing I wish it could do is sort of active/active between two geographical sites. Quickly, that is; of course I can set up a replicated brick at the other site, but unfortunately my developer's code doesn't like the wait time
18:10 _maserati I'm running gluster on top of a 3PAR SAN array. 15k rpm SAS... many disks. it's wicked fast
18:10 mz ok, so speaking of fast
18:10 mz ...
18:10 mz I'm wondering what FS to base it on
18:10 l0uis xfs is the recommended fs
18:10 mz I've read that for large files and parallel IO XFS would be the right choice
18:10 _maserati so far I haven't really heard of people not recommending xfs
18:11 _maserati oh yeah, for large files, absolutely xfs
18:12 mz one thing that I noticed is missing (and would love to have) is hierarchical storage management, some policy-based way of demoting and archiving data to slower storage
18:12 mz I've seen it's under dev so hopefully it's gonna arrive some time after we go to prod :)
18:15 victori joined #gluster
18:16 victori_ joined #gluster
18:18 ipmango joined #gluster
18:18 elico joined #gluster
18:19 elico joined #gluster
18:22 shyam left #gluster
18:34 nzero joined #gluster
18:36 _maserati mz, yeah that is going to be very cool.
18:46 _maserati mz, when they do get tiered storage management, I am going to hack up a way to archive data that hasn't been accessed after a certain amount of time to tape, while still keeping it managed like a filesystem, so if a request does eventually come through for the data it will pull it from tape :)
18:46 nsoffer joined #gluster
18:52 kotreshhr left #gluster
19:01 nzero joined #gluster
19:05 mbukatov joined #gluster
19:07 lkoranda joined #gluster
19:08 shyam joined #gluster
19:17 nzero joined #gluster
19:29 calavera joined #gluster
19:33 theron_ joined #gluster
19:34 Guest61235 joined #gluster
19:39 Guest61235 We have a 6-node gluster.  The nodes were up and running for many months (in production), and there was a mistake on our side where the /etc/hosts file was wiped, and then we put the hosts file back.  After that I found 3 of them have very high CPU (80%), and the other 3 are not busy at all.   When requests are being randomly dispatched to any of the nodes, we observed a 40% error rate as well as overall slowness.   What could cause
19:39 Guest61235 nodes to be so busy?  and what causes the errors?
19:40 nzero moss, did you get your question answered?
19:45 nzero Guest61235 did you check to see if all the peers are connecting using gluster peer status
19:46 nzero Guest61235 did any of the hostnames change from what gluster peer status reports
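
The checks nzero suggests, with the healthy output sketched; the peer count matches the 6-node setup, hostnames are illustrative:

    gluster peer status
    # Number of Peers: 5
    # Hostname: node2
    # State: Peer in Cluster (Connected)
    # ...one block per peer; anything showing Disconnected, or a hostname
    # that no longer resolves, points back at the /etc/hosts incident
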
19:48 calisto joined #gluster
20:00 nzero joined #gluster
20:23 nzero joined #gluster
20:27 DV joined #gluster
20:28 nzero joined #gluster
20:29 moss nzero: no not yet
20:29 moss nzero: it appears that there is no script in /etc/init.d/ for glusterfs
20:29 moss and that i must start it manually...
20:29 Guest61235 no host name changes
20:30 JoeJulian moss: what distro?
20:30 autoditac joined #gluster
20:30 Guest61235 brb, on a conf call.
20:31 moss JoeJulian: Ubuntu 14.04-LTS
20:31 JoeJulian The job is named glusterfs-server, iirc.
20:31 nzero moss, this is what i use. i installed from mesosphere package: http://pastebin.com/VJksy7aZ
20:31 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
20:31 JoeJulian And ubuntu uses upstart.
20:32 moss you are correct.
20:32 moss heh
20:32 doekia joined #gluster
20:33 moss JoeJulian: how do i set it to start on boot?
20:33 JoeJulian It's ubuntu. Everything is set to do that by default. Hell, unconfigured services start on install for that matter. One of the reasons I hate ubuntu.
20:34 theron joined #gluster
20:34 JoeJulian If it's not starting, I'm guessing that means that the network job fails so the network emit never happens and any jobs that require it would not start.
20:35 JoeJulian upstart is the easiest init replacement there is wrt debugging
20:35 * JoeJulian paves his way to hell by lying.
20:36 JoeJulian (upstart is the other reason I hate ubuntu)
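
Since Ubuntu 14.04 drives services through upstart, the daemon is managed with upstart's own commands rather than an /etc/init.d script. The job name glusterfs-server is per JoeJulian above; the commands are stock upstart:

    sudo status glusterfs-server       # current state of the job
    sudo start glusterfs-server        # start it immediately
    initctl list | grep gluster        # confirm the job is registered at all
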
20:37 moss JoeJulian: thanks man, i think you've given me enough info to fix this :D
20:37 moss one last question
20:38 JoeJulian I won't hold you to that.
20:38 coredump joined #gluster
20:38 moss Do I need to mount the volumes on both nodes in order for it to replicate? In other words, I was trying to write to /mnt/volume1 and it doesn't replicate, but when I mount it and write to the mounted folder (/example for instance) it writes to both volumes
20:39 JoeJulian @glossary
20:39 glusterbot JoeJulian: A "server" hosts "bricks" (ie. server1:/foo) which belong to a "volume"  which is accessed from a "client"  . The "master" geosynchronizes a "volume" to a "slave" (ie. remote1:/data/foo).
20:40 JoeJulian You have to write to the volume through a client. Writing to a brick is like dd'ing to some random block on the hard drive and expecting xfs to know what to do with that.
20:40 _maserati I'm gonna ask this question without googling first because if there is a de facto "better than the rest" that's the only answer I want... is there a decent web UI for monitoring glusterfs?
20:40 JoeJulian no
20:40 JoeJulian and yes
20:40 _maserati damnit
20:40 _maserati and yay
20:41 JoeJulian But there's no de-facto and, in fact, there are as many opinions about the correct monitoring system as there are monitoring systems.
20:41 _maserati Simple, quaint?
20:41 _maserati throw it on top of nginx?
20:41 JoeJulian I'm currently using consul
20:41 _maserati i'll check it out, thanks
20:42 _maserati oh boy, do you have a URL you can throw me? consul is apparently the name of a billion things
20:42 JoeJulian lol
20:42 nzero https://www.consul.io/
20:42 glusterbot Title: Consul by HashiCorp (at www.consul.io)
20:42 _maserati thank you sir
20:43 JoeJulian yep, that one
20:43 JoeJulian I wonder...
20:43 JoeJulian @lucky consul
20:43 glusterbot JoeJulian: https://www.consul.io/
20:43 _maserati ahh not designed specifically for gluster. Does it have like a gluster addon or something ?
20:44 JoeJulian It's just another monitoring tool. I use python scripts to scrape data from "gluster volume status", "gluster volume heal $vol info", etc.
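
A minimal sketch of that kind of scrape, written here as a shell check; the volume name myvol is a placeholder, and the exit codes follow consul's passing/warning/critical convention for check scripts:

    #!/bin/sh
    # Flag the node critical if any self-heal entries are outstanding.
    pending=$(gluster volume heal myvol info \
        | awk '/Number of entries:/ {s += $4} END {print s + 0}')
    [ "$pending" -eq 0 ] && exit 0     # passing: nothing to heal
    exit 2                             # critical: heals pending
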
20:44 _maserati ah. I was hoping some gluster fan developed a cool little dashboard for it is all
20:44 JoeJulian oh wait... there is something...
20:45 _maserati consul is pretty cool, i like the demo
20:46 JoeJulian https://github.com/pcuzner/gstatus
20:46 glusterbot Title: pcuzner/gstatus · GitHub (at github.com)
20:46 JoeJulian Sounds like you might like that.
20:46 _maserati sweet! that's perfect
20:46 _maserati thanks JoeJulian
20:46 kovshenin joined #gluster
20:46 _maserati @glusterbot give JoeJulian +1
20:46 autoditac joined #gluster
20:46 _maserati I can't figure gbot out =(
20:48 JoeJulian JoeJulian++
20:48 glusterbot JoeJulian: Error: You're not allowed to adjust your own karma.
20:57 JonathanD joined #gluster
20:59 ipmango joined #gluster
21:01 _maserati JoeJulian++
21:01 glusterbot _maserati: JoeJulian's karma is now 21
21:03 _maserati Now I get why C++ keeps getting karma, lol
21:03 glusterbot _maserati: C's karma is now 8
21:06 aaronott joined #gluster
21:08 _maserati Now that I know how to give karma, i'm going to have to slip in a few extra ones for JoeJulian++ because you've helped me a good 5-6 times now
21:08 glusterbot _maserati: JoeJulian's karma is now 22
21:09 n-st joined #gluster
21:21 nage joined #gluster
21:29 6A4AAACZX joined #gluster
21:30 badone_ joined #gluster
21:30 Lee_ joined #gluster
21:35 calavera_ joined #gluster
22:13 nzero joined #gluster
22:21 calisto joined #gluster
22:47 calavera joined #gluster
22:55 nzero joined #gluster
22:58 calavera joined #gluster
23:03 shyam joined #gluster
23:34 calisto joined #gluster
23:43 nangthang joined #gluster
23:45 victori joined #gluster
23:47 harish joined #gluster
23:50 julim joined #gluster
23:57 thangnn_ joined #gluster
23:59 cyberswat joined #gluster
23:59 jbautista- joined #gluster
