
IRC log for #gluster, 2014-07-09


All times shown according to UTC.

Time Nick Message
00:01 cristov joined #gluster
00:43 sjm joined #gluster
00:44 vpshastry joined #gluster
00:48 verdurin joined #gluster
00:54 gmcwhistler joined #gluster
01:11 sauce joined #gluster
01:22 harish joined #gluster
01:40 vpshastry joined #gluster
01:51 sjm left #gluster
01:51 _2_panda3 joined #gluster
01:51 _2_panda3 hi
01:51 glusterbot _2_panda3: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
01:52 _2_panda3 :)
01:52 gildub joined #gluster
02:11 haomaiwa_ joined #gluster
02:12 bala joined #gluster
02:21 harish joined #gluster
02:26 Pupeno_ joined #gluster
02:26 haomaiw__ joined #gluster
02:35 Peter3 JoeJulian: I am seeing the df vs du issue again, and this time I see that the du -b output differs between the nfs and glusterfs mounts
02:37 Peter3 http://pastie.org/9370048
02:37 glusterbot Title: #9370048 - Pastie (at pastie.org)
02:46 ThatGraemeGuy joined #gluster
02:54 bharata-rao joined #gluster
02:59 aravindavk joined #gluster
03:36 kumar joined #gluster
03:37 atinmu joined #gluster
03:39 nbalachandran joined #gluster
03:52 RameshN joined #gluster
03:52 aravindavk joined #gluster
03:55 itisravi joined #gluster
04:01 plarsen joined #gluster
04:06 _ndevos joined #gluster
04:15 kanagaraj joined #gluster
04:18 vpshastry joined #gluster
04:20 shubhendu joined #gluster
04:26 deepakcs joined #gluster
04:31 nishanth joined #gluster
04:37 dtrainor joined #gluster
04:46 sahina joined #gluster
04:49 nshaikh joined #gluster
04:52 ndarshan joined #gluster
05:01 sputnik1_ joined #gluster
05:06 vpshastry joined #gluster
05:18 ppai joined #gluster
05:18 prasanth joined #gluster
05:26 kdhananjay joined #gluster
05:32 gmcwhist_ joined #gluster
05:49 kshlm joined #gluster
05:49 hagarth joined #gluster
05:50 saurabh joined #gluster
05:54 davinder17 joined #gluster
05:54 stickyboy joined #gluster
05:54 vkoppad joined #gluster
06:31 ricky-ti1 joined #gluster
06:32 vimal joined #gluster
06:32 karnan joined #gluster
06:37 spandit joined #gluster
06:38 gmcwhist_ joined #gluster
06:43 vu joined #gluster
06:45 monotek joined #gluster
07:00 davinder16 joined #gluster
07:03 ctria joined #gluster
07:04 rastar joined #gluster
07:05 rastar_ joined #gluster
07:05 eseyman joined #gluster
07:08 keytab joined #gluster
07:14 ekuric joined #gluster
07:14 RaSTar joined #gluster
07:15 glusterbot New news from newglusterbugs: [Bug 1114604] [FEAT] Improve SSL support <https://bugzilla.redhat.com/show_bug.cgi?id=1114604>
07:15 RaSTar joined #gluster
07:21 nbalachandran joined #gluster
07:26 ppai joined #gluster
07:29 pvh_sa joined #gluster
07:33 fsimonce joined #gluster
07:36 qdk joined #gluster
07:45 glusterbot New news from newglusterbugs: [Bug 1117655] 0-mem-pool: invalid argument with fio --thread <https://bugzilla.redhat.com/show_bug.cgi?id=1117655>
07:58 rjoseph joined #gluster
08:00 aravindavk joined #gluster
08:02 fdsfsdf joined #gluster
08:02 mshadle joined #gluster
08:13 hybrid512 joined #gluster
08:29 calum_ joined #gluster
08:34 lalatenduM joined #gluster
08:48 hagarth joined #gluster
09:04 ppai joined #gluster
09:12 meghanam joined #gluster
09:19 RameshN joined #gluster
09:25 andreask joined #gluster
09:27 meghanam_ joined #gluster
09:27 sahina joined #gluster
09:28 shubhendu joined #gluster
09:31 ninkotech joined #gluster
09:32 ninkotech__ joined #gluster
09:35 bala1 joined #gluster
09:39 kdhananjay joined #gluster
09:43 nishanth joined #gluster
09:48 haomaiwang joined #gluster
09:52 ndarshan joined #gluster
09:53 meghanam_ joined #gluster
09:53 meghanam joined #gluster
10:01 haomaiwa_ joined #gluster
10:04 kshlm joined #gluster
10:09 ninkotech joined #gluster
10:09 deepakcs joined #gluster
10:09 ninkotech__ joined #gluster
10:16 kshlm joined #gluster
10:27 Slashman joined #gluster
10:40 nbalachandran joined #gluster
10:45 glusterbot New news from newglusterbugs: [Bug 1077406] Striped volume does not work with VMware esxi v4.1, 5.1 or 5.5 <https://bugzilla.redhat.com/show_bug.cgi?id=1077406>
10:47 gildub joined #gluster
10:56 edward1 joined #gluster
11:01 meghanam joined #gluster
11:01 meghanam_ joined #gluster
11:01 karnan joined #gluster
11:07 pvh_sa hey there, if I'm doing multiple bricks on a single server, should they be in different lvols and different filesystems? or can I just create them as different directories of the same filesystem?
11:14 bene2 joined #gluster
11:14 ppai joined #gluster
11:14 pdrakeweb joined #gluster
11:15 pvh_sa seems if you use the same FS then free space doesn't show up right, so I'm making 1 lvol per brick
11:19 sahina joined #gluster
11:21 calum_ joined #gluster
11:27 ndarshan joined #gluster
11:30 _ndevos pvh_sa: some use the 'one filesystem for many bricks' approach and call it 'thin provisioning'
11:32 ndevos pvh_sa: but yes, the size of the volume depends on the size of the underlying filesystem that is used for the bricks, so if you want 'correct' or 'understandable' values, use one filesystem per brick
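A minimal sketch of the one-filesystem-per-brick layout ndevos describes above; the volume group, logical volume, and brick paths are hypothetical:

    # carve one logical volume and filesystem per brick out of a shared volume group
    lvcreate -L 100G -n brick1 vg_gluster
    mkfs.xfs -i size=512 /dev/vg_gluster/brick1     # 512-byte inodes are commonly recommended for gluster xattrs
    mkdir -p /bricks/brick1
    mount /dev/vg_gluster/brick1 /bricks/brick1
    # repeat for brick2, brick3, ... and use /bricks/brickN/data as the brick paths;
    # df on the volume then reflects the per-brick filesystem sizes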
11:33 nishanth joined #gluster
11:36 shubhendu joined #gluster
11:37 LebedevRI joined #gluster
11:38 bala1 joined #gluster
11:39 pvh_sa ndevos, thanks.
11:39 pvh_sa so on my bricks, do I run glusterd *and* glusterfsd ? what's the difference between them?
11:40 qdk joined #gluster
11:40 ndevos glusterd is the management daemon, it handles all the requests from the 'gluster' commandline, and passes the volume layout to clients that mount/use the volume
11:40 ndevos glusterd also starts additional gluster services, like glusterfsd, the gluster-nfs server, etc
11:41 ndevos glusterfsd is a process that listens on a tcp-port, and handles all I/O for a specific brick
11:41 harish_ joined #gluster
11:41 ndevos glusterfs processes are clients (fuse-mount, nfs-server, self-heal, ...); they first connect to glusterd, and then the client knows which bricks to talk to
11:43 pvh_sa ok thanks, so... both need to be running then? because gluster will talk to glusterd to do 'create' and then clients will talk to glusterfsd
11:43 ndevos yes, when you create+start a volume, glusterd starts glusterfsd
11:44 ndevos when you mount a volume, glusterfs-fuse connects to glusterd, gets the volume layout, and uses that volume layout to connect to the bricks (glusterfsd)
11:44 pvh_sa ok, so there is no need to enable glusterfsd manually (as in chkconfig) because glusterd will start that?
11:44 ndevos yes, that's correct
11:45 ndevos the glusterfsd sevice or systemd job is used to stop the glusterfsd processes on reboot/shutdown/etc
11:45 ndevos s/sevice/service/
11:45 glusterbot What ndevos meant to say was: the glusterfsd service or systemd job is used to stop the glusterfsd processes on reboot/shutdown/etc
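A quick way to see the split ndevos describes, with the management daemon on one side and one I/O daemon per brick on the other; the volume and brick names here are invented:

    service glusterd start        # only the management daemon needs to be enabled at boot
    gluster volume create demo replica 2 srv1:/bricks/b1/data srv2:/bricks/b1/data
    gluster volume start demo     # glusterd now spawns one glusterfsd per local brick
    gluster volume status demo    # lists the brick (glusterfsd) processes and their TCP ports
    pgrep -a glusterfsd           # one process per brick hosted on this server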
12:00 kanagaraj joined #gluster
12:01 RameshN joined #gluster
12:07 theron joined #gluster
12:08 theron joined #gluster
12:09 pdrakeweb joined #gluster
12:12 itisravi joined #gluster
12:13 itisravi joined #gluster
12:27 tdasilva joined #gluster
12:28 sputnik1_ joined #gluster
12:31 RameshN_ joined #gluster
12:34 pdrakeweb joined #gluster
12:42 Thilam joined #gluster
12:44 chirino joined #gluster
12:45 stickyboy joined #gluster
12:48 julim joined #gluster
13:00 japuzzo joined #gluster
13:04 _Bryan_ joined #gluster
13:06 TvL2386 joined #gluster
13:14 sahina joined #gluster
13:16 ndarshan joined #gluster
13:16 glusterbot New news from newglusterbugs: [Bug 1117822] Tracker bug for GlusterFS 3.6.0 <https://bugzilla.redhat.com/show_bug.cgi?id=1117822>
13:16 hagarth joined #gluster
13:17 kkeithley ,,(ports}
13:17 glusterbot I do not know about 'ports}', but I do know about these similar topics: 'ports'
13:17 kkeithley ,,(ports)
13:17 glusterbot glusterd's management port is 24007/tcp and 24008/tcp if you use rdma. Bricks (glusterfsd) use 24009 & up for <3.4 and 49152 & up for 3.4. (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111 and 2049 since 3.4.
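As a rough illustration of those port ranges on a 3.4+ server, in plain iptables syntax (widen or narrow the brick-port range to match how many bricks the server actually hosts):

    iptables -A INPUT -p tcp --dport 24007:24008 -j ACCEPT                # glusterd management (24008 only with rdma)
    iptables -A INPUT -p tcp --dport 49152:49251 -j ACCEPT                # brick (glusterfsd) ports, one per brick
    iptables -A INPUT -p tcp --dport 38465:38468 -j ACCEPT                # gluster-nfs and NLM
    iptables -A INPUT -p tcp -m multiport --dports 111,2049 -j ACCEPT     # rpcbind/portmap and NFS
    iptables -A INPUT -p udp --dport 111 -j ACCEPT                        # rpcbind also listens on UDP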
13:19 deeville joined #gluster
13:21 shubhendu joined #gluster
13:22 hagarth left #gluster
13:22 hagarth joined #gluster
13:22 nbalachandran joined #gluster
13:22 Thilam hi there, I've just updated my glusterfs from 3.5 to 3.5.1 and I still have an issue when I want to enable quota
13:23 Thilam gluster volume quota projets enable
13:23 deeville Is there a way to specify hosts to root-squash? Or is there no way around this global setting?
13:23 Thilam quota: Could not start quota auxiliary mount
13:23 Thilam Quota command failed. Please check the cli logs for more details
13:23 pvh_sa hey there, how should I specify my mount command so that it is doesn't have a single point of failure (the server named in the mount command)?
13:23 Thilam in logs [2014-07-09 13:20:09.066609] E [cli-cmd-volume.c:1036:gf_cli_create_auxiliary_mount] 0-cli: Failed to create auxiliary mount directory /var/run/gluster/projets/. Reason : No such file or directory
13:23 sjm joined #gluster
13:24 Thilam when I create the folder it's ok, but I suppose that's not normal behaviour
13:24 Thilam in fact it's the /var/run/gluster folder which is missing
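A sketch of the workaround Thilam describes while waiting for a proper fix; the volume name 'projets' comes from the log, and whether the directory survives a reboot depends on how the distribution handles /var/run:

    mkdir -p /var/run/gluster            # recreate the missing parent directory for the auxiliary mount
    gluster volume quota projets enable  # retry; the aux mount should now appear under /var/run/gluster/projets/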
13:25 nishanth joined #gluster
14:12 ilbot3 joined #gluster
14:12 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
14:13 bennyturns joined #gluster
14:15 ndevos Thilam: maybe you can figure out what <gfid:7d74ef56-b073-421e-b9e6-1c1d1927cfd0> corresponds to, and heal it
14:16 pvh_sa joined #gluster
14:16 glusterbot New news from newglusterbugs: [Bug 1117851] DHT :- data loss - file is missing on renaming same file from multiple client at same time <https://bugzilla.redhat.com/show_bug.cgi?id=1117851>
14:16 ndevos Thilam: <gfid:7d74ef56-b073-421e-b9e6-1c1d1927cfd0> should be a symlink on the brick; on another brick, check <brick>/.glusterfs/7d/74/7d74ef56-b073-421e-b9e6-1c1d1927cfd0
14:17 chirino_m joined #gluster
14:17 ndevos Thilam: the target of the symlink is the name of the directory on the volume; you should be able to heal that by running 'stat <mountpoint>/target/of/symlink'
14:18 ndevos the <mountpoint> should be a fuse-mount, not the mountpoint of the brick
14:18 mortuar joined #gluster
14:19 davinder16 joined #gluster
14:19 ndevos Thilam: depending on the directory-depth of the symlink, you may need to heal the parent dirs first, like 'stat <mountpoint>/target', 'stat <mountpoint>/target/of'
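Put together, ndevos's steps look roughly like this; the brick path and mount point are placeholders, and the gfid is the one from the log:

    # on a brick that still has the entry, see where the gfid symlink points;
    # its target ends in the real directory name, which tells you the path on the volume
    readlink /bricks/b1/data/.glusterfs/7d/74/7d74ef56-b073-421e-b9e6-1c1d1927cfd0
    # then, on a fuse mount of the volume, stat the parent directories first, then the directory itself
    stat /mnt/projets/target
    stat /mnt/projets/target/of
    stat /mnt/projets/target/of/symlink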
14:20 cjanbanan1 joined #gluster
14:20 VerboEse joined #gluster
14:21 jobewan joined #gluster
14:23 cjanbanan1 Where can I find information on how GlusterFS reads from a replicated brick? According to the profiling data, it seems to choose only one of the bricks, but how is this decision made, and does it last until that particular brick is removed?
14:23 theron joined #gluster
14:25 rwheeler joined #gluster
14:27 ricky-ticky1 joined #gluster
14:27 diegows joined #gluster
14:35 Thilam thx ndevos
14:36 Thilam but have you seen my previous message about quota activation?
14:36 Thilam 15:22
14:37 Thilam I thought this problem was supposed to be solved in the 3.5.1 release
14:38 ndevos Thilam: right, enabling quota should create that directory, you can file a bug for that and add glusterfs-3.5.2 in the blocks field (need to enable 'advanced' mode for reporting)
14:38 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
14:39 Thilam k thx
14:40 Thilam on my way
14:49 Thilam sorry, I've enabled the "advanced" report but I can't find the field to specify glusterfs-3.5.2
14:51 semiosis Thilam: 3.5.2 is not out yet.  use pre-release
14:57 kkeithley 3.5.2 is a choice for the Version:
14:58 Thilam yeah but it's not in the advanced fields
14:58 Thilam like ndevos asked me, I was searching for a target release field
14:58 Thilam nevermind
14:59 kkeithley after you create the BZ you can update and put 3.5.2 in the Target Release
14:59 semiosis ah so it is
15:00 ndevos Thilam: I meant the text-field called 'blocks', you can add "glusterfs-3.5.2" in there
15:00 JustinClift *** Gluster Community Meeting is NOW.  #gluster-meeting on irc.freenode.net ***
15:01 Thilam there is no text field called blocks :)
15:01 Thilam even if I am in advanced view
15:02 sauce my two glusterfs servers in AWS crashed at the same time :( this is the second time this has happened with VERY light usage.  i can't even SSH in
15:02 sauce AWS monitoring shows both had CPU go to 100% about 30 minutes ago
15:03 jdarcy joined #gluster
15:03 lmickh joined #gluster
15:05 andreask joined #gluster
15:07 anoopcs joined #gluster
15:09 ndevos Thilam: I have that field on the bottom of my enter_bug.cgi page...
15:09 ndevos Thilam: you can also try to set it by using https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS&blocked=glusterfs-3.5.2
15:09 glusterbot Title: Log in to Red Hat Bugzilla (at bugzilla.redhat.com)
15:15 anoopcs left #gluster
15:16 Thilam I'll use your 2nd link; at the bottom of my enter_bug.cgi page, the last text field is Description :)
15:16 Thilam oh no, the last one is Bug-ID
15:17 vpshastry joined #gluster
15:17 LebedevRI joined #gluster
15:18 gmcwhistler joined #gluster
15:19 Thilam hey, got the blocks field once I'd submitted my bug :)
15:22 P0w3r3d joined #gluster
15:23 bala joined #gluster
15:27 cjanbanan1 left #gluster
15:27 cjanbanan joined #gluster
15:28 marbu joined #gluster
15:32 bala1 joined #gluster
15:36 bala joined #gluster
15:40 Peter3 joined #gluster
15:44 LebedevRI joined #gluster
15:46 Eco_ joined #gluster
15:47 glusterbot New news from newglusterbugs: [Bug 1117888] Problem when enabling quota : Could not start quota auxiliary mount <https://bugzilla.redhat.com/show_bug.cgi?id=1117888> || [Bug 1117886] Gluster not resolving hosts with IPv6 only lookups <https://bugzilla.redhat.com/show_bug.cgi?id=1117886>
15:54 ron-slc joined #gluster
15:57 deeville joined #gluster
15:57 ndevos Thilam: nice, you seem to have added the blocker after filing the bug? glad to know that works at least :)
16:01 Thilam yes
16:03 Thilam strange, but it's ok now :)
16:05 doo joined #gluster
16:06 jcsp joined #gluster
16:12 MacWinner joined #gluster
16:17 glusterbot New news from newglusterbugs: [Bug 1117921] Wrong releaseversion picked up when doing 'make -C extras/LinuxRPM glusterrpms' <https://bugzilla.redhat.com/show_bug.cgi?id=1117921>
16:18 tg2 joined #gluster
16:20 MacWinner joined #gluster
16:21 MacWinner joined #gluster
16:29 Mo__ joined #gluster
16:31 davinder16 joined #gluster
16:37 elico joined #gluster
16:37 cjanbanan joined #gluster
16:52 coredump joined #gluster
17:01 pvh_sa joined #gluster
17:13 kanagaraj joined #gluster
17:14 vimal joined #gluster
17:15 rturk joined #gluster
17:17 glusterbot New news from newglusterbugs: [Bug 1117951] Use C-locale for numerics (strtod and friends) <https://bugzilla.redhat.com/show_bug.cgi?id=1117951>
17:29 plarsen joined #gluster
17:32 ricky-ti1 joined #gluster
17:36 cristov joined #gluster
17:37 qdk joined #gluster
17:38 anotheral is it normal for heals to be very very slow?  This one has only restored about 11GB of data after 15 hours
17:39 bfoster joined #gluster
17:42 jiffe98 so if I ls in a directory and get a whole bunch of IO errors and files with question marks, how does one go about fixing that?
17:43 JoeJulian_ jiffe98: Start with "gluster volume status"
17:44 jiffe98 JoeJulian_: everything looks good there, bricks and nfs servers all running
17:44 JoeJulian_ anotheral: You can always just walk the directory tree from a client mount. It shouldn't be that slow, though, unless you've got very busy servers.
17:44 JoeJulian_ jiffe98: check that directory on the bricks?
17:46 anotheral JoeJulian_: ah, that will force the heal to catch up?
17:47 JoeJulian_ anotheral: Anything that needs healing /should/ be healed by the client that way. You can verify by looking in the client log.
17:47 anotheral this is a distributed replicated 2x2 cluster, and one of the bricks up and died
17:47 anotheral (four bricks, one per node)
17:47 JoeJulian_ s/node/server
17:48 anotheral ok
17:48 anotheral ah, it's already at 20GB now!
17:48 anotheral thanks, JoeJulian_
17:48 JoeJulian_ You're welcome
17:49 anotheral 25GB now - muuuuch better.
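The 'walk the directory tree from a client mount' approach JoeJulian mentions can be scripted; /mnt/volume stands in for whatever the fuse mount point actually is:

    # stat every file through the fuse mount so the client triggers self-heal on anything out of sync
    find /mnt/volume -noleaf -print0 | xargs --null stat > /dev/null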
17:51 jiffe98 files are there but the dates on them are different
17:55 jiffe98 we rsync one of each mirror every night, sounds like they tried to restore someone by rsyncing in the other direction
17:57 deeville joined #gluster
18:04 jiffe98 apparently that process doesn't work, rsyncing both mirrors so they both match and I still get errors
18:05 Peter3 what do we do with failures in volume rebalance?
18:07 calum_ joined #gluster
18:08 LebedevRI joined #gluster
18:10 Eco_ joined #gluster
18:11 hchiramm_ joined #gluster
18:17 tg2 rm -rf *
18:21 andreask joined #gluster
18:28 purpleidea Peter3: ignore tg2 he's a bot
18:31 plarsen joined #gluster
18:31 Peter3 ok
18:31 semiosis purpleidea: oh no
18:31 Peter3 any idea what we should do with the failures on volume rebalance
18:31 Peter3 ?
18:32 purpleidea semiosis: what's up?
18:32 semiosis this is the second bot of yours i'm having to ignore ;)
18:35 pureflex joined #gluster
18:38 cjanbanan joined #gluster
18:49 qdk joined #gluster
18:53 purpleidea semiosis: lol no it's a real person that i know irl. but he's trolling! lol
18:53 purpleidea semiosis: i think it's safe to keep him /ignore-d lol
18:54 semiosis we'll see
18:55 * JoeJulian can always kickban him...
18:55 chirino joined #gluster
18:55 Peter3 you guys talking about me? ;(
18:55 JoeJulian Failures on a rebalance are usually: the target drive has less free space than the source drive
18:56 purpleidea Peter3: no, you're good
18:56 Peter3 :)
18:56 Peter3 thanks!
18:57 Peter3 JoeJulian: if the source drive has more space than the target, then why would it try to rebalance?
18:57 purpleidea Peter3: random unrelated, believe it or not, but I know another "peter3" :P
18:57 JoeJulian @lucky dht misses are expensive
18:57 glusterbot JoeJulian: http://joejulian.name/blog/dht-misses-are-expensive/
18:57 JoeJulian Peter3: Read the first part of that to learn how dht works. ^
18:58 Peter3 thanks!
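A hedged way to check for the free-space condition JoeJulian describes when a rebalance reports failures; the volume name, brick paths, and log file location are assumptions based on a typical layout:

    gluster volume rebalance myvol status                          # scanned/failed counts per node
    df -h /bricks/b1/data /bricks/b2/data                          # compare free space on source and target bricks
    grep ' E ' /var/log/glusterfs/myvol-rebalance.log | tail       # error lines name the files that failed to move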
19:03 rotbeard joined #gluster
19:18 coredump joined #gluster
19:19 ekuric joined #gluster
19:21 cultavix joined #gluster
19:22 cultavix joined #gluster
19:22 cultavix hi all
19:25 semiosis hi
19:25 glusterbot semiosis: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
19:25 semiosis cultavix: ^^^
19:25 cultavix semiosis :)
19:26 cultavix I'm very happy with my gluster cluster so far, just got it set up at home now
19:26 cultavix now looking to expand it with no downtime
19:26 cultavix and take advantage of some of its features
19:26 cultavix like snapshotting
19:26 cultavix geo-replication, etc
19:27 cultavix does each node have to have the same size disk ?
19:27 semiosis well, no, but that's usually a good idea
19:27 cultavix I've got 2x servers with 100GB drives each
19:28 cultavix could I add for example, 2x servers with 500GB drives now
19:28 cultavix and combine everything
19:28 cultavix not really sure what the best way is
19:28 semiosis glusterfs distributes *files* evenly among the bricks (not bytes) so you could run out of space on your smaller bricks before your larger ones
19:28 cultavix ah ok, that makes sense
19:28 semiosis you could split the 500GB drives into 5 x 100GB bricks (with LVM, or partitions)
19:28 cultavix so I've sort of limited myself to 100GB right now
19:29 cultavix ah ok, that sounds cool
19:29 semiosis replicate between the servers & distribute over the bricks
19:29 cultavix I could do like a 2 x 2 = 4 ?
19:29 cultavix and just get the right sizes via lvm
19:30 semiosis something like that... though if you have 5 x 100 on one pair of servers and 1 x 100 on another it would be 2 x 6
19:30 cultavix ah ok, so I would create a new brick for each 100GB partition/lvm
19:30 semiosis yep
19:30 cultavix and I can do that on the current 2x servers or shall I create a few more ?
19:30 cultavix (this is all for just testing btw, so its just for me to learn)
19:30 semiosis you can have many bricks per server
19:31 cultavix I'd like use as much of the disk as I can, with at least 1 replica
19:31 cultavix so if one fails, then it's no problem
19:31 Paul-C joined #gluster
19:33 Peter3 Why would this happen, and do I need to worry about it?
19:33 Peter3 http://pastie.org/9372368
19:33 cultavix ok, here is what I'm going to do: I'm going to create 2x new servers with 2x 100GB disks; I'd like to expand my disk space by 100GB
19:33 glusterbot Title: #9372368 - Pastie (at pastie.org)
19:34 semiosis cultavix: well, there are some benefits to expanding the bricks in-place, with lvm, rather than adding more bricks
19:35 semiosis cultavix: if you add-brick then you need to rebalance, which is an expensive operation
19:35 semiosis cultavix: if you just expand existing bricks you can avoid that
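Expanding an existing brick in place, as semiosis suggests, is just a matter of growing the logical volume and filesystem underneath it (assuming LVM and XFS; names are hypothetical):

    lvextend -L +100G /dev/vg_bricks/brick1   # grow the logical volume backing the brick
    xfs_growfs /bricks/brick1                 # grow the XFS filesystem; the volume's reported size follows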
19:35 cultavix but I'd like to add more complexity, at least 2 more servers
19:36 semiosis cultavix: to add servers you can either add-brick (req's rebalance) or move existing bricks to the new server
19:36 semiosis if you plan ahead with many bricks, that is
19:36 cultavix ah ok, I can move them onto new servers...
19:36 cultavix thats handy!
19:36 semiosis the replace-brick command
19:36 semiosis which only moves the data on that brick, not all data on all bricks like rebalance
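The two expansion paths being compared, roughly as the 3.x CLI of the time exposed them; volume, server, and brick names are placeholders:

    # option 1: add a new replica pair, then redistribute files with a (costly) rebalance
    gluster volume add-brick myvol server3:/bricks/b1/data server4:/bricks/b1/data
    gluster volume rebalance myvol start
    # option 2: move one existing brick to a new server; only that brick's data is migrated
    gluster volume replace-brick myvol server1:/bricks/b2/data server3:/bricks/b2/data start
    gluster volume replace-brick myvol server1:/bricks/b2/data server3:/bricks/b2/data status
    gluster volume replace-brick myvol server1:/bricks/b2/data server3:/bricks/b2/data commit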
19:37 cultavix ok cool, thanks buddy... I'll get cracking.... I may come back later on tonight or tomorrow, you guys have been very helpful
19:37 cultavix I've only got 1 brick
19:37 semiosis good luck
19:37 cultavix cheers :)
19:37 * semiosis biased against rebalance
19:38 cjanbanan joined #gluster
19:38 Peter3 semiosis: huh?
19:39 semiosis i dont trust it
19:40 Peter3 you mean you do not trust rebalance?
19:41 semiosis that is what i mean
19:41 Peter3 i am still getting missing space between df and du
19:41 Peter3 i wonder if rebalance would fix it
19:41 semiosis i dont trust du either
19:41 semiosis the list of things i do not trust is extensive
19:43 tdasilva joined #gluster
19:43 firemanxbr joined #gluster
19:49 Peter3 joined #gluster
19:53 KORG joined #gluster
20:02 stickyboy semiosis: :P
20:24 SpeeR joined #gluster
20:29 coredump|br joined #gluster
20:31 chirino joined #gluster
20:37 cjanbanan joined #gluster
20:43 coredump joined #gluster
20:55 coredump|br joined #gluster
21:10 ninkotech__ joined #gluster
21:17 hchiramm_ joined #gluster
21:21 pureflex joined #gluster
21:34 cultavix joined #gluster
21:42 dtrainor joined #gluster
22:00 cjanbanan joined #gluster
22:00 sjm left #gluster
22:11 sonicrose joined #gluster
22:12 sonicrose hi all.  is this a good place to hire a gluster expert/freelancer to take a look over my gluster setup?
22:13 JoeJulian JustinClift: ^
22:15 sonicrose I really need this storage to run top notch, and since I've set it all up by myself with nothing to go by, I'm 100% sure that if I can get a gluster expert (or experts) to review my setup we can make it 100% better than it is now
22:15 sonicrose it's worth a few hundred dollars to me
22:15 klaas joined #gluster
22:16 JoeJulian I would imagine a few thousand dollars at the least, but that's just my expectation.
22:18 gmcwhistler joined #gluster
22:19 coredump joined #gluster
22:24 sonicrose thousands huh? dang
22:38 cjanbanan joined #gluster
22:51 cultavix joined #gluster
22:53 chirino joined #gluster
22:53 fidevo joined #gluster
22:58 Eco_ if anyone wants to take a gander, just rolled out the new gluster site at www.gluster.org
22:59 JoeJulian prdy
22:59 Eco_ the old site still exists so if it makes folks sad we can always roll back  ;)
22:59 JoeJulian Can I try to break it?!?!
22:59 Eco_ please do
22:59 Eco_ will write a proper blog post about the changes but wanted to get it out first
23:00 JoeJulian I'll spin up a few thousand VM's and put it through some real testing!
23:00 JoeJulian ... actually, I won't. I have to figure this other thing out first.
23:03 Eco_ JoeJulian, lol
23:04 Eco_ it's Middleman-based now so it should at least take longer to fall over than the previous site
23:22 pureflex joined #gluster
23:34 necrogami joined #gluster
23:35 gildub joined #gluster
23:44 gmcwhistler joined #gluster
