
IRC log for #gluster, 2013-09-18


All times shown according to UTC.

Time Nick Message
00:04 glusterbot New news from resolvedglusterbugs: [Bug 764966] gerrit integration fixes <http://goo.gl/AZDsh>
00:05 msvbhat joined #gluster
00:09 Guest33406 joined #gluster
00:12 Guest33406 Looking for some assistance with setting up a simple test gluster 3.4.0 on fedora 19. I have the software installed, disabled selinux, have 2 nodes in peer status but volume create fails with no error detail, and nothing useful in cli log
00:25 \_pol joined #gluster
00:44 vpshastry joined #gluster
00:46 JoeJulian :O
00:48 MugginsM hello!
00:48 glusterbot MugginsM: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
00:49 avati :O
00:49 MugginsM looks like we desperately need readdir-ahead :)
00:49 avati hell!o
00:50 avati MugginsM, it is merged in master, will be in 3.5
00:50 * MugginsM nods
00:51 MugginsM we've got GDAL (www.gdal.org) doing directory walks all over the place
00:51 avati MugginsM, did you test it?
00:51 MugginsM not yet
00:51 MugginsM currently trying to modify GDAL a bit
00:51 MugginsM to make it Not Do That
00:51 avati you might want to test a combination of readdirplus and readdir-ahead.. the combination works great
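The combination avati mentions is the FUSE readdirplus support plus the readdir-ahead translator. A minimal sketch of enabling both, assuming a hypothetical volume gv0 and a release that ships readdir-ahead (it landed in 3.5):

    # turn on the readdir-ahead translator for the volume (3.5 and later)
    gluster volume set gv0 performance.readdir-ahead on
    # mount the FUSE client with readdirplus enabled
    mount -t glusterfs -o use-readdirp=yes server1:/gv0 /mnt/gv0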
00:54 MugginsM profile shows 10x as many READDIR as anything else
00:54 vpshastry1 joined #gluster
00:54 avati hmm, are you using samba?
00:54 MugginsM and next is LOOKUP
00:54 MugginsM everything else is well behind
00:55 MugginsM no, fuse glusterfs
00:55 avati samba on fuse can generate a LOT of readdirs
00:55 avati if one of your clients is re-exporting the mount through samba, for example
00:56 MugginsM small number of clients (<10) doing lots of image/geodata processing on small files
00:56 MugginsM what's the downside of turning on lookup-unhashed?    we have 2 servers, 6 replicated bricks each
00:56 avati is it written in shell script?
00:57 MugginsM mostly C, I think
00:57 avati bash expansion of '*' wildcards, and even makefiles generate lots of readdirs
00:57 MugginsM our dev has been stracing and reckons the GDAL library is doing lots of readdirs
00:59 avati http://review.gluster.org/5770 + open-behind can improve small file performance significantly too
00:59 glusterbot Title: Gerrit Code Review (at review.gluster.org)
00:59 MugginsM got 100 readdir to 10 lookup to 1 read
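The READDIR/LOOKUP counts above come from the volume profiler; a short sketch of collecting them, assuming a volume named gv0:

    gluster volume profile gv0 start
    # ... run the workload for a while ...
    gluster volume profile gv0 info    # per-brick fop counts and latencies
    gluster volume profile gv0 stop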
00:59 MugginsM oh that looks interesting
00:59 vpshastry joined #gluster
01:00 avati MugginsM, hmm, is that library trying to resolve case insensitivity?
01:00 avati (of filenames)
01:03 MugginsM who knows why it's doing  what it's trying to do :-/
01:04 MugginsM is there any risk to turning off lookup-unhashed?
01:04 MugginsM our dev reckons he's fixing the readdir problem, just looking at the number of lookups now
01:05 avati don't think lookup-unhashed is going to make a difference here
01:05 avati if all lookups happen on existing files, it wont help
01:06 avati if there are lots of file creates (where an initial lookup is done to verify file does *NOT* exist), then lookup-unhashed helps
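For reference, lookup-unhashed is a regular volume option; a sketch, assuming a volume named gv0. With it off, a LOOKUP only queries the hashed subvolume, so leaving the default in place is safer if files may sit on non-hashed bricks after a rebalance:

    gluster volume set gv0 cluster.lookup-unhashed off    # accepted values: on | off | auto
    gluster volume reset gv0 cluster.lookup-unhashed      # return to the default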
01:07 MugginsM this is what we see on one of the bricks:  https://gist.github.com/colincoghill/8692be977a52794812be
01:07 glusterbot <http://goo.gl/koUUg3> (at gist.github.com)
01:08 MugginsM and it's not fast :-/
01:09 MugginsM ooo readdirp is different from readdir
01:10 MugginsM I did not know that
01:29 jag3773 joined #gluster
01:43 harish joined #gluster
02:11 zapotah joined #gluster
02:38 badone_ joined #gluster
02:43 asias joined #gluster
02:50 vshankar joined #gluster
02:52 zapotah joined #gluster
03:01 kshlm joined #gluster
03:01 vpshastry joined #gluster
03:09 bharata-rao joined #gluster
03:11 msciciel1 joined #gluster
03:13 Gugge_ joined #gluster
03:13 GLHMarmot joined #gluster
03:13 chirino joined #gluster
03:13 hchiramm_ joined #gluster
03:13 jbrooks_ joined #gluster
03:14 basic- joined #gluster
03:14 looped joined #gluster
03:14 tg2 joined #gluster
03:18 atrius` joined #gluster
03:18 Elendrys joined #gluster
03:19 kkeithley joined #gluster
03:19 kkeithley joined #gluster
03:19 basic` joined #gluster
03:20 shubhendu joined #gluster
03:20 portante joined #gluster
03:26 l4v joined #gluster
03:26 risibusy joined #gluster
03:26 poptix joined #gluster
03:33 \_pol joined #gluster
03:34 sgowda joined #gluster
03:35 andrewklau joined #gluster
03:36 andrewklau Hi, I'm having an issue where my bricks aren't showing as online
03:36 andrewklau gluster volume status
03:36 andrewklau but the services are running
03:45 kkeithley joined #gluster
03:49 andrewklau left #gluster
03:50 andrewklau1 joined #gluster
03:51 itisravi joined #gluster
03:59 MugginsM bleh, 3.5  readdir-ahead doesn't build against 3.4 :)
03:59 MugginsM been a while since I've done much C, lessee
04:03 rjoseph joined #gluster
04:09 AndroUser2 joined #gluster
04:15 ppai joined #gluster
04:15 bulde joined #gluster
04:17 bfoster joined #gluster
04:27 MugginsM ok, no chance anyone has a readdir-ahead against 3.4.0/1?  :)
04:35 zerick joined #gluster
04:37 JoeJulian MugginsM: Doesn't look like there's a backport of it. :(
04:40 ndarshan joined #gluster
04:41 MugginsM I'll see if I can shoehorn it in, I didn't try very hard
04:43 vpshastry joined #gluster
04:44 \_pol joined #gluster
04:46 spandit joined #gluster
04:46 MugginsM ok, with making GDAL do a lot less readdirs, we've gone from 2IOPS on the server to about 250IOPS :)
04:48 Shdwdrgn joined #gluster
04:48 JoeJulian Sounds much better.
04:50 hchiramm_ joined #gluster
04:51 MugginsM literally 100x faster
04:51 MugginsM for our "real" workload
04:55 bala joined #gluster
04:57 MugginsM (really real, I just changed the production site :) )
04:57 hagarth joined #gluster
04:59 JoeJulian yuck... I just looked at what it would take to merge that patch into release-3.4 and it's not going to be me backporting it, I can tell you that for certain.
05:07 CheRi joined #gluster
05:10 dusmant joined #gluster
05:10 ndarshan joined #gluster
05:11 MugginsM 'k
05:15 glusterbot New news from newglusterbugs: [Bug 1009210] Incorrect NFS ACL encoding causes "system.posix_acl_default" setxattr failure on bricks <http://goo.gl/zswiSg> || [Bug 1009223] NFS daemon is limiting IOs to 64KB <http://goo.gl/aX4G8s>
05:18 kevein joined #gluster
05:18 shruti joined #gluster
05:18 ajha joined #gluster
05:22 shylesh joined #gluster
05:23 lalatenduM joined #gluster
05:24 l4v joined #gluster
05:25 lalatenduM joined #gluster
05:29 rastar joined #gluster
05:33 syoyo__ joined #gluster
05:33 raghu joined #gluster
05:36 ndarshan joined #gluster
05:36 bulde joined #gluster
05:37 nshaikh joined #gluster
05:38 bala1 joined #gluster
05:45 TomKa joined #gluster
05:45 aravindavk joined #gluster
05:51 rgustafs joined #gluster
05:52 dusmant joined #gluster
06:13 tziOm joined #gluster
06:16 zapotah joined #gluster
06:17 bulde joined #gluster
06:18 ProT-0-TypE joined #gluster
06:21 jtux joined #gluster
06:22 ndarshan joined #gluster
06:24 kanagaraj joined #gluster
06:26 dusmant joined #gluster
06:35 psharma joined #gluster
06:48 davinder joined #gluster
06:49 mohankumar joined #gluster
06:52 shubhendu joined #gluster
06:56 ProT-0-TypE joined #gluster
06:57 ricky-ticky joined #gluster
07:01 dusmant joined #gluster
07:02 ngoswami joined #gluster
07:09 eseyman joined #gluster
07:10 ctria joined #gluster
07:14 xavih joined #gluster
07:16 harish joined #gluster
07:22 ekuric joined #gluster
07:23 a2 joined #gluster
07:30 ProT-0-TypE joined #gluster
07:31 lkoranda joined #gluster
07:36 aib_007 joined #gluster
07:37 andreask joined #gluster
07:42 uebera|| joined #gluster
07:46 shubhendu joined #gluster
08:01 johan___1 joined #gluster
08:01 spandit joined #gluster
08:04 lkoranda joined #gluster
08:16 abyss^ There would be any problem when I set up glusterFS server and client on the same node?
08:16 andreask no, works fine
08:17 andreask ... as long as you use native gluster mount
08:19 abyss^ OK. Thank you
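A minimal sketch of the co-located setup andreask describes, with hypothetical volume and mount-point names; the client simply uses the native FUSE mount against the local glusterd:

    # on the same box that runs glusterd and the brick
    mount -t glusterfs localhost:/myvol /mnt/myvol
    # or in /etc/fstab:
    # localhost:/myvol  /mnt/myvol  glusterfs  defaults,_netdev  0 0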
08:21 bulde1 joined #gluster
08:22 shylesh joined #gluster
08:33 kaushal_ joined #gluster
08:34 l4v joined #gluster
08:36 X3NQ joined #gluster
08:38 satheesh joined #gluster
08:42 X3NQ joined #gluster
08:45 yinyin joined #gluster
08:52 dusmant joined #gluster
09:02 mgebbe joined #gluster
09:04 vimal joined #gluster
09:15 monotek joined #gluster
09:15 glusterbot New news from newglusterbugs: [Bug 1002940] change in changelog-encoding <http://goo.gl/dmQAcW>
09:16 monotek hi :-)
09:16 bulde joined #gluster
09:17 spandit joined #gluster
09:19 monotek i just started using glusterfs 3.4 in ubuntu 12.04.
09:19 monotek i started with 2 nodes replicated and all was working fine.
09:19 monotek now i added 2 more nodes and after the rebalance my vm images didnt work anymore (read only fs).
09:19 monotek it seems the images was moved to the new nodes. the /data dirs on the old nodes are empty now.
09:19 monotek how can i fix this?
09:23 andreask no matter where the files are physically located, if you access them through the gluster-mount all should be fine
09:24 monotek ok, i think ive found the reason for the RO fs... i have a lot of:
09:24 monotek /var/log/glusterfs/bricks/data.log:[2013-09-18 08:12:13.041736] E [posix.c:2135:posix_writev] 0-gv0-posix: write failed: offset 79229554688, Bad file descriptor
09:24 monotek in one of the new nodes...
09:25 monotek will it help to remove the node again?
09:29 monotek hmmm... the underlying hdds look fine.... the other node says:
09:29 monotek /var/log/glusterfs/etc-glusterfs-glusterd.vol.log.1:[2013-09-17 16:26:12.875189] E [glusterd-store.c:1378:glusterd_retrieve_uuid] 0-: Unable to get store handle!
09:29 monotek /var/log/glusterfs/etc-glusterfs-glusterd.vol.log.1:[2013-09-17 16:26:13.300858] E [glusterd-utils.c:4081:glusterd_brick_start] 0-management: Could not find peer on which brick storage2.serverhost:/data resides
09:29 monotek /var/log/glusterfs/etc-glusterfs-glusterd.vol.log.1:[2013-09-17 16:26:32.271381] E [glusterd-utils.c:3627:glusterd_nodesvc_unlink_socket_file] 0-management: Failed to remove /var/run/cb98281b80bd47a5aa6162af13efa5d9.socket error: Resource temporarily unavailable
09:29 monotek /var/log/glusterfs/etc-glusterfs-glusterd.vol.log.1:[2013-09-17 16:26:33.285782] E [glusterd-utils.c:3627:glusterd_nodesvc_unlink_socket_file] 0-management: Failed to remove /var/run/c5ba2f50d11f91b542e639fd86414149.socket error: No such file or directory
09:29 monotek /var/log/glusterfs/etc-glusterfs-glusterd.vol.log.1:[2013-09-17 16:26:54.524528] E [glusterd-rebalance.c:729:glusterd_defrag_event_notify_handle] 0-: Failed to update status
09:29 monotek /var/log/glusterfs/glustershd.log:[2013-09-17 16:26:14.669501] E [afr-self-heald.c:1067:afr_find_child_position] 0-gv0-replicate-0: getxattr failed on gv0-client-0 - (Transport endpoint is not connected)
09:29 monotek /var/log/glusterfs/glustershd.log:[2013-09-17 16:26:33.312056] E [afr-self-heald.c:1067:afr_find_child_position] 0-gv0-replicate-1: getxattr failed on gv0-client-2 - (Transport endpoint is not connected)
09:29 monotek /var/log/glusterfs/glustershd.log:[2013-09-17 16:26:33.315047] E [afr-self-heald.c:1067:afr_find_child_position] 0-gv0-replicate-0: getxattr failed on gv0-client-1 - (Transport endpoint is not connected)
09:29 monotek connection problem?
09:30 andreask name resolution works?
09:33 davinder2 joined #gluster
09:34 ndarshan joined #gluster
09:34 harish joined #gluster
09:35 dusmant joined #gluster
09:37 monotek yes
09:45 shubhendu joined #gluster
09:46 monotek hmmm... seems the rebalance is still in progress?
09:46 monotek root@kvm3:/# gluster volume status
09:46 monotek Status of volume: gv0
09:46 monotek Gluster process                          Port    Online  Pid
09:46 monotek ------------------------------------------------------------------------------
09:46 monotek Brick storage1.serverhost:/data          49152   Y       5714
09:46 monotek Brick storage2.serverhost:/data          49153   Y       18322
09:46 monotek Brick kvm1.serverhost:/data              49152   Y       6078
09:46 monotek Brick kvm3.serverhost:/data              49152   Y       5319
09:46 monotek NFS Server on localhost                  2049    Y       5329
09:46 monotek Self-heal Daemon on localhost            N/A     Y       5336
09:46 monotek NFS Server on kvm1.serverhost            2049    Y       6088
09:46 monotek Self-heal Daemon on kvm1.serverhost      N/A     Y       6095
09:46 monotek NFS Server on storage2.serverhost        2049    Y       29060
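Whether that rebalance is still running shows up in the rebalance status output rather than in volume status; a sketch, assuming the volume gv0 above:

    gluster volume rebalance gv0 status
    # per-node detail lives in the rebalance log:
    less /var/log/glusterfs/gv0-rebalance.log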
09:50 spandit joined #gluster
09:50 rjoseph joined #gluster
09:56 ndarshan joined #gluster
10:04 sgowda joined #gluster
10:13 Rydekull O_O
10:13 Rydekull monotek: please, stop pasting so much, use pastebin or similiar services
10:15 ricky-ticky joined #gluster
10:19 manik joined #gluster
10:20 shylesh joined #gluster
10:20 edward1 joined #gluster
10:26 ricky-ticky joined #gluster
10:30 ricky-ticky joined #gluster
10:31 monotek sorry...
10:32 ndarshan joined #gluster
10:35 ricky-ticky joined #gluster
10:36 kshlm joined #gluster
10:41 vpshastry1 joined #gluster
10:43 ricky-ticky joined #gluster
10:46 glusterbot New news from newglusterbugs: [Bug 990028] enable gfid to path conversion <http://goo.gl/1HwiQc> || [Bug 969461] RFE: Quota fixes <http://goo.gl/XFSM4>
10:50 spandit joined #gluster
10:53 ricky-ticky joined #gluster
10:54 sgowda joined #gluster
10:55 jtux joined #gluster
11:00 ccha joined #gluster
11:02 dusmant joined #gluster
11:04 andreask joined #gluster
11:16 harish_ joined #gluster
11:17 CheRi joined #gluster
11:17 hagarth joined #gluster
11:22 rjoseph joined #gluster
11:23 ppai joined #gluster
11:23 dusmant joined #gluster
11:35 vpshastry joined #gluster
11:38 CheRi joined #gluster
11:42 bala joined #gluster
11:43 spandit joined #gluster
11:47 B21956 joined #gluster
11:58 mohankumar joined #gluster
12:01 ndarshan joined #gluster
12:08 chirino joined #gluster
12:10 itisravi_ joined #gluster
12:11 bennyturns joined #gluster
12:13 hagarth joined #gluster
12:21 harish_ joined #gluster
12:29 vpshastry joined #gluster
12:35 manik joined #gluster
12:57 harish_ joined #gluster
12:58 lalatenduM joined #gluster
12:58 anands joined #gluster
13:02 mohankumar joined #gluster
13:13 semiosis :O
13:14 Elendrys hi guys
13:15 Elendrys Does anyone know if i can recover from a "peer rejected" state without manually change the state value in my peers/id files ?
13:16 Elendrys Gluster 3.3.0
13:17 anands joined #gluster
13:17 jcsp joined #gluster
13:40 crashmag joined #gluster
13:44 anands1 joined #gluster
13:50 anands joined #gluster
13:51 rcheleguini joined #gluster
13:57 bugs_ joined #gluster
14:02 ndk joined #gluster
14:14 wushudoin joined #gluster
14:21 kaptk2 joined #gluster
14:33 zapotah joined #gluster
14:40 failshell joined #gluster
14:54 Dave_H Hi everyone. I am looking for some assistance with setting up a simple test gluster 3.4.0 on fedora 19. I have the software installed, disabled selinux, have 2 nodes in peer status but volume create fails with no error detail, and nothing useful in cli log. Any suggestions as where else to look? Thanks
14:59 sprachgenerator joined #gluster
15:00 jag3773 joined #gluster
15:01 lalatenduM joined #gluster
15:01 ndk joined #gluster
15:03 bulde joined #gluster
15:05 JoeJulian /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
15:05 JoeJulian Dave_H: ^
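A sketch of what that usually looks like in practice, with hypothetical node names and brick paths: re-run the create, then read the tail of the glusterd log on the node where it failed. A frequent culprit with re-used brick directories is the "or a prefix of it is already part of a volume" error that comes up again later in this log (bug 835494).

    gluster volume create testvol replica 2 node1:/bricks/b1 node2:/bricks/b1
    tail -n 50 /var/log/glusterfs/etc-glusterfs-glusterd.vol.log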
15:47 JoeJulian Elendrys: I've had limited luck with stopping all glusterd and starting them again. 3.3.0 has some very nasty bugs wrt server-to-server communication. Nothing that causes data loss, but potential deadlocks and such. You really should upgrade.
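Beyond restarting glusterd, the procedure usually documented for clearing a Peer Rejected state on the rejected node is roughly the following; treat it as a hedged sketch and verify against the docs for your release before touching /var/lib/glusterd:

    service glusterd stop
    # keep glusterd.info, clear the rest of the local state
    find /var/lib/glusterd -mindepth 1 -maxdepth 1 ! -name glusterd.info -exec rm -rf {} +
    service glusterd start
    gluster peer probe <a-good-node>
    service glusterd restart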
15:47 andreask joined #gluster
15:50 vpshastry joined #gluster
16:16 hvera1981 joined #gluster
16:20 LoudNoises joined #gluster
16:20 hvera1981 Greetings folks, I have a mission to synchronize 2 file servers over an ADSL (10 Mbit) link, I would like to know from you if gluster is suitable for this kind of operation or should I move to something like Unison
16:21 hagarth joined #gluster
16:21 hvera1981 I am pretty worried about using fuse in such an unstable network profile
16:24 neofob if i understand it right, the directory tree structure is the same among distributed bricks
16:24 neofob would that be a "hidden overhead"?
16:24 kkeithley_ Gluster is not, per se, a tool for keeping file servers in sync.  If you use gluster then you would use geo-replication to sync your primary storage to the server on the other side of the ADSL line.
16:28 hvera1981 yes, the structure will be same.
16:30 Elendrys joined #gluster
16:30 hvera1981 I was studying geo replication, http://gluster.org/community/documentation/index.php/Gluster_3.2:_Exploring_Geo-replication_Deployment_Scenarios  , as described I could not see how to handle "write" operations in the  slave
16:30 glusterbot <http://goo.gl/OsP3q> (at gluster.org)
16:30 hvera1981 it looks like it is unilateral
16:30 neofob thanks for the info
16:32 kkeithley_ AFR (replication) is synchronous. Clients write to both/all servers. Your ADSL line is not suitable for AFR, there's too much latency. geo-rep is asynchronous, which is what I suppose you mean by unilateral.
16:35 RedShift joined #gluster
16:38 DataBeaver joined #gluster
16:40 Mo__ joined #gluster
16:41 hvera1981 Sorry about the conflict . By unilateral I mean that slaves were unable to perform WRITE operations in the file system. I will study geo replication as you suggested
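A sketch of the one-way, asynchronous geo-replication kkeithley_ suggests, using the 3.2/3.3-era CLI with hypothetical names (master volume mastervol, slave host backup.example.com); the exact slave URL syntax varies by release, so check the deployment guide linked above. Writes are only accepted on the master side:

    gluster volume geo-replication mastervol backup.example.com:/data/remote_dir start
    gluster volume geo-replication mastervol backup.example.com:/data/remote_dir status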
16:45 bulde joined #gluster
16:46 duerF joined #gluster
16:49 shylesh joined #gluster
16:49 FooBar_ joined #gluster
16:49 vpshastry left #gluster
16:50 FooBar_ Is it possible to remove a brick from a volume, without losing the data, so by telling gluster to re-locate all the files on that specific brick (of group of bricks) to the remaining bricks, and then removing / deleting the bricks ?
16:51 \_pol joined #gluster
16:51 FooBar joined #gluster
16:51 shylesh FooBar_: gluster volume remove-brick <volname> <brick> start
16:51 elyograg FooBar_: yes.  you do a "remove-brick start" command, then repeatedly check the status until it says it's done, and then I think it's a remove-brick commit.
16:51 FooBar_ i'll try
16:52 elyograg if your volume is at least half-full, you'll run into a problem.
16:52 FooBar_ i'm still testing, < 100G data in a 4T volume
16:52 \_pol joined #gluster
16:52 elyograg bug 862437
16:52 glusterbot Bug http://goo.gl/0UN3li unspecified, medium, rc, rcritten, CLOSED DUPLICATE, Cert install errors
16:52 elyograg duplicate?!
16:53 elyograg oh, I did the wrong number.
16:53 elyograg bug 862347
16:53 glusterbot Bug http://goo.gl/QjhdI medium, medium, ---, sgowda, ASSIGNED , Migration with "remove-brick start" fails if bricks are more than half full
16:53 elyograg glusterbot: thanks
16:53 glusterbot elyograg: you're welcome
16:56 jclift joined #gluster
16:56 FooBar In this case it's a replicated volume (8 disks, 2 replicas)
16:57 elyograg if you're not removing required storage for some of the files, just a replica, you should be able to just remove the brick, no need to start it and watch the status.
16:57 FooBar seeing if I can remove 1 set of replica's (so go to 6 disks)
16:57 zaitcev joined #gluster
16:57 FooBar from AB, CD, EF, GH to AB, CD, EF
16:58 elyograg if you've got 8 disks and replica count is 2, then it's not just replicated, it's also either striped or distributed (possibly both).
16:58 FooBar reducing the size of the volume
16:58 FooBar yup..distributed replicated
16:58 elyograg ok, so you will need to do the start / repeated status / commit thing.
16:59 elyograg if you're not at least half full then it should proceed without problems.  with it at least half full, it will run out of space before it's done and you'll need to re-do the start multiple times to get it to finish without error.
16:59 aliguori joined #gluster
17:00 FooBar hmm... not seeing anything happening
17:00 FooBar gluster> volume remove-brick gv0 gluster2:/export/sdb/brick start
17:00 FooBar gluster> volume remove-brick gv0 gluster2:/export/sdb/brick status
17:00 FooBar Node Rebalanced-files          size       scanned      failures         status run-time in secs
17:00 FooBar ---------      -----------   -----------   -----------   -----------   ------------   --------------
17:00 FooBar localhost                0        0Bytes             0             0      completed             5.00
17:00 FooBar localhost                0        0Bytes             0             0      completed             5.00
17:00 FooBar localhost                0        0Bytes             0             0      completed             5.00
17:01 FooBar gluster3                0        0Bytes             0             0      completed             4.00
17:01 semiosis FooBar: use pastie/gist for multiline pastes please
17:02 FooBar ok
17:03 shylesh FooBar: can you check rebalance logs on gluster2 node
17:04 FooBar [2013-09-18 15:56:43.256522] W [glusterfsd.c:1002:cleanup_and_exit] (-->/lib64/libc.so.6(clone+0x6d) [0x349b4e894d] (-->/lib64/libpthread.so.0() [0x349b807851] (-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xcd) [0x40533d]))) 0-: received signum (15), shutting down
17:04 FooBar (that's the last line, a few hours ago)
17:05 shylesh FooBar: looks like nothing to migrate
17:05 shylesh FooBar: do u have data on that brick
17:05 elyograg if you've only removed one replica, it probably knows that it doesn't need to actually move the data.  I've never tried to remove only one brick, though.
17:05 FooBar 56Gigs
17:05 FooBar i've tried removing the other brick that replicated this one also... no difference
17:05 elyograg You'd want to remove all bricks from a replica set.
17:06 FooBar yup, to shrink the volume
17:06 FooBar but keep 2 replica's on all data
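A sketch of the shrink being discussed, with hypothetical brick paths: remove both bricks of one replica pair so the volume stays replica 2, wait for the migration to finish, then commit (as elyograg describes above):

    gluster volume remove-brick gv0 serverG:/export/sdg/brick serverH:/export/sdh/brick start
    gluster volume remove-brick gv0 serverG:/export/sdg/brick serverH:/export/sdh/brick status
    # once every node reports "completed":
    gluster volume remove-brick gv0 serverG:/export/sdg/brick serverH:/export/sdh/brick commit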
17:07 premera_j_n_h joined #gluster
17:09 FooBar afk for lunch brb
17:13 XpineX_ joined #gluster
17:19 gmcwhistler joined #gluster
17:20 JoeJulian @ports
17:20 glusterbot JoeJulian: glusterd's management port is 24007/tcp and 24008/tcp if you use rdma. Bricks (glusterfsd) use 24009 & up for <3.4 and 49152 & up for 3.4. (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111 and 2049 since 3.4.
17:21 davinder joined #gluster
18:06 vimal joined #gluster
18:08 edoceo I asked a while ago about the need for RPC (rpcbind) when using Gluster - my connection dropped so I did not see if any answer
18:37 Chocobo joined #gluster
18:39 Chocobo Hi all, when creating a gluster cluster (replica 2 for example) is it recommended to use JBODs or raid arrays for each node?  A raid array seems redundant but it could keep a node up in the event of a disk failure.
18:39 lpabon joined #gluster
18:41 elyograg I pondered doing JBODs.  As I contemplated the amount of work required to fix things in the event of a disk failure, I concluded that I would create a couple of RAID5 arrays per server.  Those are further broken down into 5TB volumes with LVM.
18:43 Chocobo elyograg: what is the purpose of breaking them into 5TB volumes?
18:45 elyograg the server has 12 drive bays.  I've got two 6-drive RAID5 arrays.  The 5TB volumes let us use the same brick size regardless of what size drive goes in the bays.  With 4TB drives, each RAID array is 20TB usable, so it yields four 5TB volumes.  If we put 1TB drives in, then it gives us only one 5TB volume ... but it's the same brick size no matter what.
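A sketch of carving fixed-size bricks out of a RAID array with LVM as described above; device names, sizes, and mount points are illustrative only:

    pvcreate /dev/sdb                       # the RAID5 array as the OS sees it
    vgcreate gluster_vg /dev/sdb
    lvcreate -L 5T -n brick1 gluster_vg
    mkfs.xfs -i size=512 /dev/gluster_vg/brick1
    mkdir -p /bricks/brick1
    mount /dev/gluster_vg/brick1 /bricks/brick1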
18:47 Chocobo elyograg: Oh, that is smart!
18:49 bennyturns joined #gluster
18:51 Chocobo elyograg: So do you still replicate or just distribute?
18:51 elyograg both.
18:59 Bjorklund joined #gluster
19:09 B21956 joined #gluster
19:10 B21956 left #gluster
19:35 P0w3r3d joined #gluster
19:45 neofob left #gluster
20:07 sprachgenerator joined #gluster
20:12 JoeJulian file a bug
20:12 glusterbot http://goo.gl/UUuCq
20:17 JoeJulian bug 835494
20:17 glusterbot Bug http://goo.gl/zqzlN medium, medium, ---, kdhananj, ASSIGNED , Volume creation fails and gives error "<brickname> or a prefix of it is already part of a volume", eventhough that brick is not part of any volume.
20:45 SpeeR joined #gluster
20:51 SpeeR does anyone have a recommendation for a sas RAID card between the areca vs 3ware?
21:10 elyograg SpeeR: coming up with a recommendation would require googling.  I'd really hate to deprive you of that pleasure. :)
21:11 SpeeR haha I've been googling, I've put together the order with the Areca 28 port card, but after reading some sites, was having 2nd thoughts
21:12 SpeeR I'm sure the areca will work fine
21:33 ctria joined #gluster
21:34 bennyturns joined #gluster
21:35 andreask joined #gluster
21:35 l4v joined #gluster
21:45 MugginsM joined #gluster
21:48 andreask joined #gluster
22:01 uebera|| joined #gluster
22:02 StarBeast joined #gluster
22:14 fidevo joined #gluster
22:15 JoeJulian It makes sense to ask storage experts for their opinions. :)
22:15 JoeJulian I have none though.
22:17 zaitcev joined #gluster
22:18 Remco All I can think of is make sure the card you pick is not end-of-life soon, since hardware raid cards can be very picky
22:19 Remco That kind of storage is out of my league
22:21 MugginsM any guides to tuning the number of threads on a gluster server?
22:23 MugginsM or otherwise tuning
22:23 MugginsM we're totally bottlenecked on CPU at the moment
22:24 MugginsM drives are having a snooze
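One knob that exists for this in the 3.4 era is the brick-side io-threads pool; a hedged sketch, assuming a volume named gv0 (the default is 16 threads, and more threads only help if the bricks, not the client, are the bottleneck):

    gluster volume set gv0 performance.io-thread-count 32
    gluster volume info gv0    # confirm the option is set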
22:42 jcsp joined #gluster
22:55 nueces joined #gluster
22:58 paratai_ joined #gluster
23:06 jclift left #gluster
23:12 StarBeast joined #gluster
23:35 micu1 joined #gluster
23:52 basic` how can i verify i am using readdirp properly with glusterfs.fuse?
23:52 basic` output of mount doesn't tell me it's being used… fs1:/web-drupal on /data/drupalsites type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
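Two hedged ways to check, since the mount output above doesn't show it: inspect the client process arguments (the flag appears only if the mount script passed it explicitly), or compare READDIRP vs READDIR counts in the brick profile. Option and flag names assume a 3.4+ FUSE client:

    ps ax | grep '[g]lusterfs'                   # look for --use-readdirp in the arguments
    mount -t glusterfs -o use-readdirp=yes fs1:/web-drupal /data/drupalsites
    gluster volume profile web-drupal info | grep -i readdir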
