IRC log for #gluster, 2014-07-20

All times shown according to UTC.

Time Nick Message
00:05 Pupeno joined #gluster
01:03 diegows joined #gluster
01:11 cjanbanan joined #gluster
01:26 cjanbanan joined #gluster
01:47 cjanbanan joined #gluster
01:54 gildub joined #gluster
02:03 B21956 joined #gluster
02:19 Paul-C joined #gluster
03:11 Paul-C joined #gluster
03:42 japuzzo joined #gluster
03:47 cjanbanan joined #gluster
03:50 gildub joined #gluster
04:11 gildub joined #gluster
04:32 plarsen joined #gluster
04:45 marcoceppi joined #gluster
04:48 mAd-1 joined #gluster
04:56 Paul-C joined #gluster
05:26 MacWinner joined #gluster
05:58 Humble joined #gluster
06:00 anoopcs joined #gluster
06:05 Paul-C joined #gluster
06:20 LebedevRI joined #gluster
06:49 ekuric joined #gluster
06:51 ctria joined #gluster
07:33 andreask joined #gluster
07:52 vu joined #gluster
08:20 vu joined #gluster
08:22 qdk joined #gluster
08:38 cultavix joined #gluster
09:11 XpineX_ joined #gluster
09:14 vu joined #gluster
09:33 Pupeno joined #gluster
09:33 cultavix joined #gluster
09:35 vu joined #gluster
10:36 vu joined #gluster
10:43 gildub joined #gluster
10:57 bala joined #gluster
11:33 qdk joined #gluster
12:02 n0de joined #gluster
12:12 nishanth joined #gluster
12:18 cjanbanan joined #gluster
13:08 n0de joined #gluster
13:20 diegows joined #gluster
13:22 elico joined #gluster
13:24 sputnik1_ joined #gluster
13:25 firemanxbr joined #gluster
13:28 firemanxbr joined #gluster
13:35 _Bryan_ joined #gluster
13:39 bala joined #gluster
13:47 vu joined #gluster
14:06 sahina joined #gluster
14:15 cristov joined #gluster
14:26 bala joined #gluster
14:32 MacWinner joined #gluster
14:43 cjanbanan joined #gluster
14:44 recidive joined #gluster
15:11 fubada joined #gluster
15:45 and` joined #gluster
15:56 sonicrose joined #gluster
15:59 sonicrose hi all, anyone consider themselves an expert on the eager lock feature?  I'm wondering if it would be recommended or not for use with a gluster volume hosting live virtual machine disk images.  I am trying to troubleshoot occasional VM freezes that are resolved by doing service glusterd restart on my NFS servers.  Typically seems to happen when the backup jobs want to read from the same file that a VM is currently using.
16:00 sonicrose perhaps making the backup job mount as read-only would solve that, or are locks still used on read-only mounts?
16:01 sonicrose perhaps also i could make the backup skip the currently attached vhds and just backup the snapshots
16:02 sonicrose i currently have cluster.eager-lock enabled... i'm here to ask if disabling it is safe
16:02 sonicrose considering live running VMs are accessing the volume
16:03 sonicrose i'll idle here all day, so if you know please contact me thanks!
16:04 bala joined #gluster
16:10 ackjewt joined #gluster
16:10 sonicrose btw: huge ty to everyone who's put work into gluster, my xen vms love their new home on 10GbE gluster volumes at 485 MB/s
16:11 sonicrose finally able to have a 3 server xenserver pool with shared nothing
16:18 sonicrose is "eager-lock enable" equivalent to "eager-lock on"?
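For context, cluster.eager-lock is an ordinary boolean volume option, and gluster's boolean parsing generally treats enable/on (and disable/off) as the same value, so the two spellings should be equivalent. A minimal sketch of checking and toggling it, assuming a hypothetical volume named vmstore:

    gluster volume info vmstore                        # "Options Reconfigured" shows the current setting
    gluster volume set vmstore cluster.eager-lock on
    gluster volume set vmstore cluster.eager-lock off  # disabling is the same command with "off"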
16:25 vu joined #gluster
16:27 stickyboy Hehe, just did a rolling upgrade and now I've got 2 split-brain files... hopefully that's all. :)
16:28 stickyboy 3.5.0 -> 3.5.1.
16:28 stickyboy I didn't see any write workloads running on the storage so I figured "why not?"... ;)
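A sketch of how the two split-brain files could be tracked down after a rolling upgrade like this, assuming a replica volume named homes (the name is hypothetical):

    gluster volume heal homes info               # entries still pending self-heal
    gluster volume heal homes info split-brain   # entries the self-heal daemon flagged as split-brain
    # on each brick, the AFR changelog xattrs show which copy blames which:
    getfattr -d -m . -e hex /bricks/homes/path/to/file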
16:32 hchiramm_ joined #gluster
16:32 social joined #gluster
16:48 cjanbanan joined #gluster
16:53 bala1 joined #gluster
16:58 bala joined #gluster
17:07 bidgy joined #gluster
17:07 bidgy left #gluster
17:12 plarsen joined #gluster
17:14 Pupeno joined #gluster
17:22 rotbeard joined #gluster
17:30 n0de joined #gluster
17:48 calum_ joined #gluster
18:11 Nowaker joined #gluster
18:31 recidive joined #gluster
18:40 MacWinner joined #gluster
18:50 mortuar joined #gluster
18:56 ekuric joined #gluster
18:57 Pupeno joined #gluster
18:57 hchiramm_ joined #gluster
19:22 Pupeno joined #gluster
19:30 stickyboy I've got this weird directory on my fuse mount of a replica volume... and I can't delete it.
19:30 stickyboy Looking at it in the backend brick, I'm not actually sure what's going on.  It's got some symlinks...
19:31 mjrosenb i don't remember gluster using symlinks for anything.
19:33 stickyboy mjrosenb: Hmm, if the file itself is a symlink then it will.
19:33 stickyboy data/.glusterfs/be/96/be96f7d8-d013-4c07-a4fe-ffbb33606b46 -> ../../5a/38/5a38cbf0-b8eb-477e-9e7b-b658a66b6764/sdb1
19:33 stickyboy So... this entry points to another folder inside another entry...
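For context on what those entries are: every file and directory on a brick carries a trusted.gfid xattr, and .glusterfs/<aa>/<bb>/<gfid> is a hard link to the file for regular files, but for directories it is a symlink pointing at the parent directory's gfid path plus the directory name, which is why the entry above resolves into another gfid directory. A hedged sketch of inspecting it on the brick (paths are illustrative):

    # gfid of the problem directory as stored on the brick
    getfattr -n trusted.gfid -e hex /path/to/brick/problem-dir
    # the matching .glusterfs entry is named after that gfid, bucketed by its first two byte pairs
    ls -l /path/to/brick/.glusterfs/be/96/be96f7d8-d013-4c07-a4fe-ffbb33606b46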
19:52 dcope joined #gluster
19:54 dcope anyone here using gluster in production?
20:02 gehaxelt How much RAM does gluster need?
20:02 gehaxelt *to run as a client
20:02 gehaxelt Is 128mb enough?
20:24 vu joined #gluster
20:36 stickyboy I'm not even sure if this is a split brain.
20:36 stickyboy I don't know what the hell it is.
20:43 recidive joined #gluster
20:44 hagarth joined #gluster
20:57 sonicrose dcope, I do
21:01 qdk joined #gluster
21:06 Nopik left #gluster
21:08 cjanbanan joined #gluster
21:17 dcope sonicrose: any major problems?
21:17 stickyboy dcope: Make sure you have RAID6.
21:17 stickyboy I lost two drives in my RAID5 last month, and GlusterFS replica saved my bacon, but I'm still chasing a few files which didn't heal properly.
21:18 dcope mmm dunno if i can do a raid 6
21:18 dcope the host i generally use just offers 5 and 0 iirc
21:19 dcope stickyboy: why would you need a raid though? i thought glusterfs would essentially do a raid 1 across your pool?
21:19 stickyboy dcope: Sure. :)
21:20 stickyboy Red Hat Storage guide recommends RAID6 with battery-backed RAID controllers.
21:20 stickyboy If you wanna go "GlusterFS replication is application-layer RAID1" ... then be my guest. ;)
21:20 dcope -_-
21:21 stickyboy dcope: Just sayin. :)
21:21 dcope my host doesn't offer raid 6 controllers
21:21 dcope and i have all the files mirrored on s3
21:21 stickyboy dcope: Well you'll probably be fine.
21:21 dcope so im not too worried about losing any
21:21 dcope because if i do lose files in the current setup, they get pulled from s3
21:21 stickyboy We're doing lotttttts of data... so we don't do S3.
21:21 dcope stickyboy: well here's my use case... perhaps you can tell me if GlusterFS would be suitable?
21:22 dcope i have a pretty beefy server with a bunch of small (5-15MB) files that get served up via nginx. right now i'm hitting performance walls, mainly disk io
21:22 stickyboy dcope: Read or write?
21:22 dcope i have been reading about glusterfs and it sounds like i could setup 2 or 3 servers to mirror the files and balance the load
21:22 dcope stickyboy: read
21:22 dcope very little writes..
21:24 stickyboy dcope: Yeah that will be a good workload.
21:24 dcope cool
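A minimal sketch of the setup being discussed, with two web servers replicating the same files and nginx serving from a local mount of the volume; the host and volume names (web1, web2, assets) are hypothetical:

    # on web1, after both servers are peered
    gluster peer probe web2
    gluster volume create assets replica 2 web1:/bricks/assets web2:/bricks/assets
    gluster volume start assets
    # on each web server, mount the volume and point nginx's document root at it
    mount -t glusterfs localhost:/assets /srv/assets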
21:24 stickyboy We do Bioinformatics... lots of writing and interactive use.
21:25 dcope stickyboy: is it pretty easy to migrate files to a glusterfs share?
21:25 stickyboy 30TB of data, and growing at 1TB a week at this rate.
21:25 dcope wow
21:25 stickyboy dcope: Yah, it's just XFS underneath.
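The "just XFS underneath" part usually amounts to formatting each brick device with a larger inode size so gluster's xattrs fit inline; a sketch, with a hypothetical device and mount point:

    mkfs.xfs -i size=512 /dev/sdb1
    mkdir -p /bricks/brick1
    mount /dev/sdb1 /bricks/brick1    # plus an fstab entry so it survives reboots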
21:27 dcope stickyboy: so do you still do writes in the share?
21:27 stickyboy dcope: Like crazy
21:27 dcope and then they get mirrored to the pool?
21:28 dcope but reads come from the pool?
21:28 dcope is that how it works?
21:28 stickyboy It's a compute cluster, Gluster is the /home :)
21:28 stickyboy Nothing fancy.
21:28 dcope neat :)
21:28 stickyboy Replicated GlusterFS... mount volumes to compute clients.
21:28 stickyboy Users log in and hammer it.
21:28 dcope i think i will test this on a VPS... and then order the servers when i get comfortable with it
21:28 dcope >:D
21:28 stickyboy I tell them "Don't write to network storage!  Use scratch!!!"
21:28 dcope heh
21:28 stickyboy But they are users so... :)
21:29 stickyboy I found one user had one directory with 1.4 million directories inside.
21:29 stickyboy Just wow.
21:29 dcope holy moly
21:30 vu joined #gluster
21:30 stickyboy Man, I gotta split.  It's 12:30 am here and I should *not* be at work.
21:30 stickyboy FML.
21:30 dcope see ya later
21:30 dcope sonicrose: are you still around?
21:49 sonicrose yup
21:50 sonicrose i had lots of problems until i found a somewhat stable config...  http://proof.sonicrose.com
21:50 glusterbot Title: Pingdom Public Reports Overview (at proof.sonicrose.com)
21:51 sonicrose i don't use any raid btw, and you probably shouldn't either
21:51 sonicrose any raid and you're wasting space
21:52 sonicrose i've got ~18TB working volume made out of 3 servers with 6 2TB sata drives each
21:53 sonicrose if i used any raid, other than raid0, i'd have only half that free space
21:54 sonicrose and using raid0 is not advised since if you lose one disk, you have to resync the whole brick, and rebuild times would get crazy long
21:55 sonicrose why use raid and gluster when gluster supports all the same stripe/replicate functions as RAID, with the added benefit that the replicas can be on other physical servers
21:56 sonicrose this way you can also use gluster to identify failed drives, and once replaced, gluster again will help you restore the data to that disk
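A hedged sketch of the layout described above: 3 servers with 6 drives each, every drive its own brick, replica 2 with the brick list ordered so each replica pair spans two different servers (hostnames and paths are hypothetical; 36TB raw comes out to roughly 18TB usable):

    # bricks listed in pairs, alternating servers so each replica pair spans two machines;
    # the remaining 12 drives continue the same pattern
    gluster volume create vmstore replica 2 \
      srv1:/bricks/d1 srv2:/bricks/d1 \
      srv3:/bricks/d1 srv1:/bricks/d2 \
      srv2:/bricks/d2 srv3:/bricks/d2
    gluster volume start vmstore
    # a failed drive is then handled at the gluster layer instead of by a RAID controller:
    gluster volume replace-brick vmstore srv1:/bricks/d1 srv1:/bricks/d1-new commit force
    gluster volume heal vmstore full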
22:00 gehaxelt hey
22:00 gehaxelt is it possible to rename a volume?
22:00 diegows joined #gluster
22:32 MacWinner joined #gluster
23:04 hagarth joined #gluster
23:14 dcope hey all, i'm trying to mount a gluster volume and when i run mount i get no error or anything, yet the volume doesn't mount
23:15 gehaxelt dcope, what does the log say?
23:16 dcope 0-glusterfs: XDR decoding error
23:16 dcope followed by 0-mgmt: failed to fetch volume file (key:/volume1)
23:16 gehaxelt dcope, are the appropriate ports opened in the firewall?
23:16 gehaxelt ah
23:17 gehaxelt hmm, does the volume exist?
23:17 dcope oh they're running different versions
23:17 gehaxelt yeah
23:18 gehaxelt I think you've found the same mailinglist entry :D
23:18 cjanbanan joined #gluster
23:20 dcope ugh the ubu packages are all fried
23:20 dcope i got it upgraded though and mounted
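For reference, the "XDR decoding error" plus "failed to fetch volume file" pair is the classic symptom of a client and server running mismatched glusterfs versions, as found above; a quick way to compare them (package names vary by distro):

    # on the client
    glusterfs --version
    # on each server
    glusterfsd --version
    gluster --version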
23:22 dcope gehaxelt: so now that i have the mount point on the client... i can read / write directly to the mount and it will balance that across the pool?
23:25 dcope so gluster has load balancing built in and enabled by default? o_O
23:25 cyberbootje1 joined #gluster
23:26 glusterbot New news from newglusterbugs: [Bug 1066529] 'op_ctx modification failed' in glusterd log after gluster volume status <https://bugzilla.redhat.com/show_bug.cgi?id=1066529>
23:27 gehaxelt dcope, afaik it sends a request to all nodes asking for a filehandle
23:27 gehaxelt the first node that answers gets primacy
23:27 gehaxelt but not too sure.
23:34 dcope gehaxelt: in this case, does the client count as a node?
23:40 gehaxelt uhm, I'm not sure. I don't think so. However, I'm quite new to gluster, too. Maybe someone else knows this?
23:41 B21956 joined #gluster
23:42 dcope gehaxelt: oh
23:42 dcope well perhaps you'll know this too
23:42 dcope so if a file handle is say on "pool server 2"...
23:42 dcope does the client hand off the connection so the end user hits pool server 2 directly?
23:42 dcope or does client act like a middle man
23:43 dcope ?
23:43 gehaxelt Sorry, out of my knowledge :(
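For what it's worth, the native FUSE client fetches the volume file from one of the servers and then opens connections to every brick itself, handling replication and distribution client-side, so no server acts as a middleman in the data path; only the machine where the volume is mounted talks to the bricks. One way to see which clients each brick is serving, using the volume1 name from the mount attempt above:

    gluster volume status volume1 clients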
23:44 gildub joined #gluster
23:48 n0de joined #gluster
23:58 elyograg left #gluster
