IRC log for #gluster, 2015-02-13


All times shown according to UTC.

Time Nick Message
00:39 Slasheri joined #gluster
00:39 Slasheri joined #gluster
00:45 MugginsM joined #gluster
00:56 Gill joined #gluster
01:25 bala joined #gluster
01:27 cyberbootje joined #gluster
01:41 gem joined #gluster
01:57 Gill joined #gluster
01:59 MugginsM joined #gluster
02:10 harish joined #gluster
02:19 doubt joined #gluster
02:26 dgandhi joined #gluster
02:34 Gill joined #gluster
02:39 rafi joined #gluster
02:49 ilbot3 joined #gluster
02:49 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
03:11 badone__ joined #gluster
03:20 rejy joined #gluster
03:30 bharata-rao joined #gluster
03:33 siel joined #gluster
03:33 siel joined #gluster
03:37 RameshN joined #gluster
03:38 wmealing1 joined #gluster
03:39 wmealing1 anyone from rh eng awake at the moment, i have something that needs dealing with.
03:47 wmealing1 nope ok.
03:47 wmealing1 left #gluster
03:49 shylesh__ joined #gluster
03:50 atinmu joined #gluster
03:54 itisravi joined #gluster
04:10 nbalacha joined #gluster
04:11 shubhendu joined #gluster
04:18 ppai joined #gluster
04:20 rafi joined #gluster
04:30 spandit joined #gluster
04:32 schandra joined #gluster
04:33 jiffin joined #gluster
04:45 ppai joined #gluster
04:54 rjoseph joined #gluster
04:56 ndarshan joined #gluster
04:58 anoopcs_ joined #gluster
05:00 gem joined #gluster
05:09 Manikandan joined #gluster
05:10 deepakcs joined #gluster
05:13 DV joined #gluster
05:15 prasanth_ joined #gluster
05:20 deepakcs joined #gluster
05:21 R0ok_ joined #gluster
05:27 atalur joined #gluster
05:30 R0ok_ joined #gluster
05:33 hagarth joined #gluster
05:35 sakshi joined #gluster
05:38 jobewan joined #gluster
05:40 overclk joined #gluster
05:40 dusmant joined #gluster
05:41 kdhananjay joined #gluster
05:55 ramteid joined #gluster
05:56 bala joined #gluster
05:56 itpings hi guys
05:57 anrao joined #gluster
06:01 anrao thank you thank you thank you :D ;)
06:02 m0zes joined #gluster
06:03 bala joined #gluster
06:14 jobewan joined #gluster
06:14 atalur joined #gluster
06:16 smohan joined #gluster
06:16 badone__ joined #gluster
06:19 ACiDGRiM joined #gluster
06:21 dusmant joined #gluster
06:21 ACiDGRiM I've noticed that during heal and sync, throughput tops out at around 300Mbps, however a file transfer off a client still achieves 800+Mbps. Is there a way to change heal priority?
06:21 ACiDGRiM this is on a replica 2 volume
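
(A hedged aside on the question above: the options below are GlusterFS 3.x self-heal settings that are sometimes tuned when heal traffic lags behind client traffic. The volume name "myvol" and the values are illustrative assumptions, not taken from this channel.)

    # Self-heal tuning sketch; "myvol" and the values are placeholders.
    gluster volume set myvol cluster.self-heal-window-size 16      # heal more blocks per cycle
    gluster volume set myvol cluster.background-self-heal-count 8  # parallel background heals
    gluster volume set myvol cluster.data-self-heal-algorithm diff # only sync changed blocks
    gluster volume info myvol   # reconfigured options are listed at the bottom
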
06:22 kanagaraj joined #gluster
06:26 ppai joined #gluster
06:28 nshaikh joined #gluster
06:30 bharata_ joined #gluster
06:40 kshlm joined #gluster
06:43 atalur joined #gluster
06:51 raghu` joined #gluster
06:57 anoopcs_ joined #gluster
06:59 anoopcs_ joined #gluster
07:01 anoopcs_ joined #gluster
07:10 bharata__ joined #gluster
07:12 lalatenduM joined #gluster
07:14 nangthang joined #gluster
07:16 jtux joined #gluster
07:18 kovshenin joined #gluster
07:20 Manikandan joined #gluster
07:35 coredump joined #gluster
07:45 schandra joined #gluster
07:48 deniszh joined #gluster
07:53 ppai joined #gluster
08:04 rjoseph|afk joined #gluster
08:04 anoopcs joined #gluster
08:05 anoopcs left #gluster
08:06 anoopcs joined #gluster
08:16 anoopcs left #gluster
08:20 anoopcs joined #gluster
08:29 rafi joined #gluster
08:29 jiffin joined #gluster
08:32 Philambdo joined #gluster
08:35 ws2k3 joined #gluster
08:38 Pupeno joined #gluster
08:38 Pupeno joined #gluster
08:39 LebedevRI joined #gluster
08:39 _polto_ joined #gluster
08:45 gildub joined #gluster
08:50 itisravi joined #gluster
08:53 fsimonce joined #gluster
09:01 hchiramm joined #gluster
09:07 rafi1 joined #gluster
09:08 kanagaraj joined #gluster
09:08 karnan joined #gluster
09:08 ppai joined #gluster
09:14 Manikandan joined #gluster
09:17 tanuck joined #gluster
09:18 atinmu joined #gluster
09:20 bala joined #gluster
09:21 dusmant joined #gluster
09:23 hagarth joined #gluster
09:31 ACiDGRiM I've noticed that during heal and sync, throughput tops out at around 300Mbps, however a file transfer off a client still achieves 800+Mbps. Is there a way to change heal priority?
09:31 ACiDGRiM this is on a replica 2 volume
09:32 Manikandan joined #gluster
09:40 ricky-ti1 joined #gluster
09:42 mmance joined #gluster
09:43 mmance does striping without replication increase write throughput?
09:48 gildub joined #gluster
09:54 overclk joined #gluster
09:55 itisravi joined #gluster
09:56 glusterbot News from newglusterbugs: [Bug 1192378] Disperse volume: client crashed while running renames with epoll enabled <https://bugzilla.redhat.com/show_bug.cgi?id=1192378>
09:57 atinmu joined #gluster
10:00 liquidat joined #gluster
10:07 hagarth joined #gluster
10:11 xavih joined #gluster
10:21 ndevos mmance: no, not really. See ,,(stripe)
10:21 glusterbot mmance: Please see http://joejulian.name/blog/should-i-use-stripe-on-glusterfs/ about stripe volumes.
10:22 dusmant joined #gluster
10:23 bala joined #gluster
10:26 soumya joined #gluster
10:26 glusterbot News from resolvedglusterbugs: [Bug 1191437] build: issue with update of upstream build from 3.7dev-0.529 to 3.7dev-0.577 <https://bugzilla.redhat.com/show_bug.cgi?id=1191437>
10:27 harish joined #gluster
10:27 nangthang joined #gluster
10:35 malevolent joined #gluster
10:35 nbalacha joined #gluster
10:39 pranithk joined #gluster
10:42 _polto_ joined #gluster
10:42 prasanth_ joined #gluster
10:42 rafi joined #gluster
10:46 T0aD joined #gluster
10:46 pranithk guys any one know of web2IRC for gluster?
10:46 ndevos pranithk: maybe https://webchat.freenode.net/ ?
10:47 pranithk ndevos: thanks niels
10:50 malevolent joined #gluster
10:51 xavih joined #gluster
10:56 glusterbot News from newglusterbugs: [Bug 1192435] server crashed during rebalance in dht_selfheal_layout_new_directory <https://bugzilla.redhat.com/show_bug.cgi?id=1192435>
10:57 malevolent joined #gluster
10:58 xavih joined #gluster
11:00 pranithk joined #gluster
11:00 jiffin joined #gluster
11:03 badone__ joined #gluster
11:07 neofob joined #gluster
11:08 dusmant joined #gluster
11:08 hagarth joined #gluster
11:08 gildub joined #gluster
11:08 xavih joined #gluster
11:09 Slashman joined #gluster
11:16 xavih joined #gluster
11:20 suman_d joined #gluster
11:21 nangthang joined #gluster
11:38 jiffin joined #gluster
11:41 pranithk left #gluster
11:46 kovshenin joined #gluster
11:48 nangthang joined #gluster
11:51 overclk joined #gluster
11:55 jiffin1 joined #gluster
12:00 T3 joined #gluster
12:03 dusmant joined #gluster
12:04 hagarth joined #gluster
12:08 DV joined #gluster
12:18 malevolent joined #gluster
12:18 ira joined #gluster
12:41 pille joined #gluster
12:52 calisto joined #gluster
12:53 ppai joined #gluster
12:53 xavih joined #gluster
12:59 awerner joined #gluster
12:59 _polto_ joined #gluster
13:06 harish joined #gluster
13:09 nangthang joined #gluster
13:19 xavih joined #gluster
13:25 xavih joined #gluster
13:25 anoopcs joined #gluster
13:30 anoopcs joined #gluster
13:31 malevolent joined #gluster
13:49 overclk joined #gluster
13:52 theron joined #gluster
13:56 _Bryan_ joined #gluster
13:58 plarsen joined #gluster
14:01 asku joined #gluster
14:03 asku left #gluster
14:03 Gill joined #gluster
14:04 dgandhi joined #gluster
14:08 bennyturns joined #gluster
14:10 malevolent joined #gluster
14:10 xavih joined #gluster
14:12 jmarley joined #gluster
14:17 johnmark joined #gluster
14:24 wkf joined #gluster
14:26 coredump joined #gluster
14:30 georgeh-LT2 joined #gluster
14:47 plarsen joined #gluster
14:57 Gill_ joined #gluster
14:58 Gill__ joined #gluster
14:59 shubhendu joined #gluster
15:02 wushudoin joined #gluster
15:03 Pupeno joined #gluster
15:03 Pupeno joined #gluster
15:08 Gill_ joined #gluster
15:12 andreask left #gluster
15:31 sprachgenerator joined #gluster
15:37 kovshenin joined #gluster
15:38 lmickh joined #gluster
15:42 T3 joined #gluster
15:44 theron joined #gluster
15:48 soumya joined #gluster
16:02 pranithk joined #gluster
16:10 pranithk JoeJulian: Who is maintaining debs?
16:10 pranithk JustinClift: Who is maintaining debs?
16:11 pranithk semiosis: Are you maintaining debs?
16:12 pranithk JoeJulian: I need to know if the update script of the debian packages runs 'glusterd --xlator-option *.upgrade=on -N'?
16:15 jackdpeterson joined #gluster
16:16 pranithk JoeJulian: volfile names for the fuse mounts have changed to contain -tcp, -rdma; they are supposed to be re-generated on upgrade but apparently that is not happening on ubuntu, wondering if you know something. Feel free to mail me if you don't see me online
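
(A hedged sketch of the volfile regeneration step pranithk refers to, roughly as a Debian/Ubuntu package post-upgrade script might run it; the service name and the stop/start ordering are assumptions, only the glusterd invocation itself comes from the log.)

    service glusterfs-server stop
    glusterd --xlator-option '*.upgrade=on' -N   # regenerates the volfiles, then exits
    service glusterfs-server start
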
16:18 kovshenin joined #gluster
16:22 coredump joined #gluster
16:26 virusuy joined #gluster
16:26 virusuy joined #gluster
16:48 coredump joined #gluster
16:51 hagarth joined #gluster
16:54 theron joined #gluster
16:57 plarsen joined #gluster
17:06 ildefonso joined #gluster
17:16 Rapture joined #gluster
17:16 Kaltax joined #gluster
17:23 Kaltax joined #gluster
17:25 rcampbel3 joined #gluster
17:31 jackdpeterson Hey all, looking for a little bit of assistance -- I have a replica 2 set of gluster servers that appear to have a relatively large number of split-brain GFIDs as well as a selection of directories in the same condition. Also getting NFS timeouts on clients (connected via NFS)
17:31 jackdpeterson I'm curious if the split-brain is related to the NFS timeouts
17:35 theron joined #gluster
17:36 theron_ joined #gluster
17:37 kovshenin joined #gluster
17:39 theron_ joined #gluster
17:41 edwardm61 joined #gluster
17:47 calisto joined #gluster
17:47 daMaestro joined #gluster
17:47 mmance ndevos: I did read it, but it didn't really give me any real comparison in numbers.  I am capturing raw video and want the best write speed I can get.
17:52 rotbeard joined #gluster
17:52 theron joined #gluster
17:52 Kaltax left #gluster
17:52 theron joined #gluster
17:53 jobewan joined #gluster
17:55 theron joined #gluster
17:55 theron joined #gluster
17:58 jackdpeterson Gluster 3.6.2 --> NFS kicks itself on volume heal?
17:58 jackdpeterson is that normal behaviour?
18:03 theron joined #gluster
18:05 theron joined #gluster
18:08 Kaltax joined #gluster
18:08 PeterA joined #gluster
18:08 Kaltax left #gluster
18:16 rcampbel3 joined #gluster
18:23 lalatenduM joined #gluster
18:42 johnbot I can't seem to locate it but I thought there was some mention of a bug that would cause older gluster volumes to fail after updating to the most recent gluster release. I'm running 3.5 and plan to upgrade to the latest this afternoon during maintenance.
18:42 johnbot words of wisdom?
18:45 theron joined #gluster
19:07 ekuric joined #gluster
19:11 rcampbel3 joined #gluster
19:12 Kaltax joined #gluster
19:14 JoeJulian jackdpeterson: via nfs, nothing client-side would cause split-brain. mismatched gfid could only be caused two ways, 1, writing directly to the brick. 2, a netsplit where two clients, 1 connected to each server, both create the same filename.
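
(A hedged sketch of how the reported split-brain could be inspected, following JoeJulian's explanation; the volume name and brick path are placeholders, not from the log.)

    gluster volume heal myvol info split-brain             # entries gluster flags as split-brain
    getfattr -d -m . -e hex /export/brick1/path/to/file    # compare trusted.gfid and the
                                                           # trusted.afr.* xattrs on each brick
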
19:14 jackdpeterson Okay, and the latter -- random nfs timeouts? It appeared that rebooting the client resolved the issue (oddly)
19:14 JoeJulian johnbot: has to do with mixing client and server versions, where you have 3.5 clients and 3.6 servers.
19:15 JoeJulian ... or mixing server versions as well.
19:15 johnbot JoeJulian: thanks for the clarification, I'll now be sure to update all gluster clients and make sure both of my gluster servers are up to date
19:15 JoeJulian jackdpeterson: not sure on the nfs timeouts. That sounds like it's time to do some packet dumps.
19:16 JoeJulian jackdpeterson: Also, look at the /var/log/glusterfs/nfs.log file
19:16 JoeJulian I'm guessing netsplit.
19:17 jackdpeterson netsplit is most likely the case with the split-brain items.
19:18 JoeJulian probably the timeouts too, then.
19:24 theron joined #gluster
19:40 T3 joined #gluster
19:44 deniszh joined #gluster
19:44 saltsa joined #gluster
19:47 T3 hey guys, is there any place where I can get a better understanding of how gluster consumes cpu, memory and i/o?
19:47 T3 I have a 9-cpu server that just seems overkill to me
19:47 T3 maybe I'll need some optimization
19:48 ACiDGRiM left #gluster
19:49 wushudoin joined #gluster
20:00 rcampbel3 joined #gluster
20:11 mmance joined #gluster
20:18 diegows joined #gluster
20:21 elico joined #gluster
20:24 _polto_ joined #gluster
20:29 madebymarkca joined #gluster
20:40 TheSov joined #gluster
20:44 TheSov hello, I am looking for a HA nfs server for vmware. I was wondering if gluster would be good for that. I know vmware slams on the sync for all writes and wanted to know if gluster mitigates that sync write speed issue
20:45 Gorian joined #gluster
20:53 JoeJulian TheSov: No idea on vmware. Not sure how it manages nfs, if it can even do NFSv3, or just about anything else about vmware.
20:54 JoeJulian I can never figure out why people work so hard to make VMware do what KVM does for free.
20:55 TheSov does kvm do everything vmware does for free?
20:55 TheSov storage vmotion and system vmotion and all that?
20:55 JoeJulian Live migration through libvirt, yep.
20:56 TheSov even the storage files?
20:56 JoeJulian And if you're using clustered storage, migration's not even a need.
20:56 TheSov hmm
20:56 TheSov something to consider
20:56 JoeJulian I mean your image is already available on all of your hypervisors. Why pay licensing.
20:57 JoeJulian That's my view, anyway. I know they're still selling a lot of them.
20:57 TheSov but thats the future, for now I need a highspeed NFS server for all our vmware clusters and was wondering if gluster is capable of turning sync off for NFS
20:57 JoeJulian no
20:58 TheSov crud
20:58 JoeJulian If you want something that lies about sync, I'm really not sure what your options are. I know it's not GlusterFS or Ceph.
20:58 TheSov i know zfs does it
20:58 TheSov but I need a HA solution
21:00 PinkFreud joined #gluster
21:01 JoeJulian I know Jeff Darcy compared a bunch of different storage solutions and complained when he found those that lie about having your data safe, but he never disclosed which ones he was complaining about.
21:01 PinkFreud heya all.  having an issue with a gluster setup consisting of 4 bricks and replication factor 2.
21:01 JoeJulian I hate when that happens.
21:01 theron joined #gluster
21:02 badone__ joined #gluster
21:02 TheSov well I actually have no issue with lying or not, this is just for boot drives on systems. the data is on iscsi targets
21:02 TheSov while i want to keep them safe its ok to not have a stateful copy
21:02 PinkFreud We're seeing the following in one of the gluster logs: 0-Veeam-client-0: remote operation failed: No space left on device
21:03 JoeJulian Looks obvious so far. Is it wrong?
21:05 PinkFreud JoeJulian: there do appear to be two bricks that are completely full (the cluster consists of 2x4TB bricks and 2x1TB bricks, the latter of which are full)
21:05 PinkFreud but we're seeing that message in the logs when gluster attempts to do a self-heal.
21:06 PinkFreud along with: 0-Veeam-replicate-0:  entry self heal  failed,   on ...
21:06 JoeJulian So what can I help you with?
21:06 PinkFreud gluster itself isn't out of space, as we still have 1.8TB free on the two 4TB nodes.
21:07 PinkFreud JoeJulian: I'm wondering why A. gluster seems to think that some of our data needs to be healed, and B. why it can't find the space to do so.
21:08 JoeJulian @pasteinfo | PinkFreud
21:08 JoeJulian Hey, where's glusterbot...
21:08 PinkFreud JoeJulian: heh.  I can put anything you'd like to see up on a web-accessable url.
21:08 PinkFreud just tell me what you'd like.
21:10 glusterbot joined #gluster
21:11 JoeJulian @pasteinfo | PinkFreud
21:11 T3 joined #gluster
21:11 PinkFreud JoeJulian: or you can just tell me what you'd like... :)
21:11 JoeJulian yeah, yeah.
21:11 * PinkFreud grins
21:12 JoeJulian glusterbot saves me a ton of typing.
21:12 JoeJulian gluster volume info
21:12 PinkFreud until you have to fix it.  :P
21:12 PinkFreud sure, give me one moment.
21:12 JoeJulian which happens about once a year
21:12 PinkFreud :)
21:14 PinkFreud http://pinkfreud.mirkwood.net/gluster/volinfo.txt
21:15 PinkFreud (unimaginative naming, I know)
21:23 PinkFreud hmm.  JoeJulian, still there?
21:24 JoeJulian Sorry, phone call
21:25 JoeJulian So, PinkFreud, which servers have the smaller disks?
21:25 PinkFreud brick01 and brick02
21:26 glusterbot joined #gluster
21:26 * PinkFreud applies superglue to glusterbot
21:28 rotbeard joined #gluster
21:30 JoeJulian My guess is that you had some sort of an outage. Self-heal happened but created a file that was more sparse on one brick (the sink) than the other (the source). The source, therefore, got full but the sink did not. The sink continued to grow to full, but writes fail on the source, increasing the pending actions count. The self-heal engine tries to correct that but since both bricks are full that can't happen.
21:30 JoeJulian A rebalance /may/ help with that.
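
(The rebalance JoeJulian mentions, with a placeholder volume name; a minimal sketch, not a prescription for this cluster.)

    gluster volume rebalance myvol start
    gluster volume rebalance myvol status
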
21:30 PinkFreud this was a newly rebuilt cluster, actually.
21:31 PinkFreud there is one thing i noticed today, though, that may be contributing.
21:31 PinkFreud /dev/mapper/VGData0-LVData0 1073319948 1073319932        16 100% /data1
21:31 PinkFreud /dev/mapper/VGData0-LVData0 1073336324 1073336304        20 100% /data1
21:31 PinkFreud that's the gluster data on bricks 01 and 02
21:32 PinkFreud looks like they were created with sizes slightly off from each other, which was not intentional.
21:32 PinkFreud still - I'd think gluster would handle that, wouldn't it?
21:32 JoeJulian Nope
21:32 madebymarkca joined #gluster
21:32 JoeJulian If you're writing to a file and the disk gets full, it doesn't move the file someplace else.
21:33 JoeJulian It will create *new* files someplace else though.
21:33 PinkFreud hmmm.
21:33 _polto_ joined #gluster
21:33 PinkFreud are files ever split up between bricks?
21:33 JoeJulian What I would have done is chopped those 4t disks into 4 partitions (I'd use lvm) so distribute would balance the file allocation more evenly.
21:33 JoeJulian no
21:33 PinkFreud yeah, we're using lvm for this.
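
(A hedged sketch of the "chop the 4T disk into smaller bricks with LVM" idea from a few lines up; the device, VG/LV names, filesystem options and mount points are all assumptions.)

    pvcreate /dev/sdb
    vgcreate vg_bricks /dev/sdb
    for n in 1 2 3 4; do
        lvcreate -l 25%VG -n brick$n vg_bricks
        mkfs.xfs -i size=512 /dev/vg_bricks/brick$n
        mkdir -p /bricks/brick$n
        mount /dev/vg_bricks/brick$n /bricks/brick$n
    done
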
21:33 JoeJulian If your individual files are expected to exceed brick size, that's the one actual use for the stripe translator.
21:34 PinkFreud so this means that, with our current setup, we're effectively limited to 1TB for a maximum file size.
21:34 JoeJulian yes
21:34 PinkFreud ouch.
21:34 PinkFreud We're not expecting that to happen, but it's still something of a surprise.
21:36 PinkFreud JoeJulian: what happens if the 1TB pair has 500MB free and I copy a 600MB file?
21:36 JoeJulian ENOSPC
21:36 JoeJulian Well
21:36 JoeJulian Not true
21:36 JoeJulian maybe
21:36 PinkFreud ENDSPC?
21:36 JoeJulian ... gah, don't make me think theory on a Friday afternoon...
21:36 PinkFreud lol
21:37 JoeJulian Yeah, pretty sure you'd get ENOSPC.
21:37 PinkFreud oh, ENOSPC
21:37 JoeJulian There's a minimum free disk setting
21:37 JoeJulian once you exceed that, new files will be created on other bricks.
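
(The minimum-free-disk setting JoeJulian refers to; the volume name and the 10% value are placeholders. It only steers where new files are created, it does not stop an existing file from filling its brick.)

    gluster volume set myvol cluster.min-free-disk 10%
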
21:38 PinkFreud hmmm.  apparently, we don't have a minimum free set on here, as both 1TB nodes filled to capacity.
21:38 PinkFreud and I'm worried about the data that's on there now.
21:39 mmance can someone show me how to add another server with two bricks to this: http://pastebin.com/NtTSTKFP
21:39 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
21:40 mmance make that http://fpaste.org/185410/14238636/
21:41 PinkFreud JoeJulian: is there any hope that the data at the end of the 1TB bricks isn't completely toast?
21:42 JoeJulian little
21:43 JoeJulian I've got one like that, one brick is 100%, the other is 90%. Data's still being written to the 90% one and self-heals are failing to the 100.
21:43 JoeJulian Once the 90 is full that VM image will be toast.
21:43 PinkFreud are you using replica=2?
21:44 JoeJulian because some doofus created a VM image larger than the brick size.
21:44 JoeJulian yes, replica 2
21:44 PinkFreud er, so the vm image isn't toast yet?
21:45 JoeJulian not yet
21:45 JoeJulian One brick's copy is, but the other is valid for now.
21:46 PinkFreud so, on my gluster fs, the data should be intact then?
21:47 JoeJulian I make no guarantees. If data couldn't be written somewhere that's up to the application to figure out what happens to it.
21:49 T3 joined #gluster
21:50 mmance You can exceed brick size only with striping was my impression.
21:50 mmance I am using a bunch of 80g sata drives in a bunch of dual core desktops to cobble together a video capture platform
21:50 mmance actually having NIC issues that are setting me back some
21:51 mmance going to add some more nodes to see if I can get some speed up
21:51 PinkFreud hmm
21:51 PinkFreud JoeJulian: you mentioned striping could get around this limitation?
21:51 mmance I have 100gb files on my 80gb drives right now
21:55 JoeJulian @stripe
21:55 glusterbot JoeJulian: Please see http://joejulian.name/blog/should-i-use-stripe-on-glusterfs/ about stripe volumes.
21:56 glusterbot News from newglusterbugs: [Bug 1191486] daemons abstraction & refactoring <https://bugzilla.redhat.com/show_bug.cgi?id=1191486>
21:56 glusterbot News from newglusterbugs: [Bug 1191919] Disperse volume: Input/output error when listing files/directories under nfs mount <https://bugzilla.redhat.com/show_bug.cgi?id=1191919>
21:56 glusterbot News from newglusterbugs: [Bug 1192114] NFS I/O error when copying a large amount of data <https://bugzilla.redhat.com/show_bug.cgi?id=1192114>
21:56 glusterbot News from newglusterbugs: [Bug 1192378] Disperse volume: client crashed while running renames with epoll enabled <https://bugzilla.redhat.com/show_bug.cgi?id=1192378>
21:56 glusterbot News from newglusterbugs: [Bug 1192435] server crashed during rebalance in dht_selfheal_layout_new_directory <https://bugzilla.redhat.com/show_bug.cgi?id=1192435>
21:56 glusterbot News from resolvedglusterbugs: [Bug 1191437] build: issue with update of upstream build from 3.7dev-0.529 to 3.7dev-0.577 <https://bugzilla.redhat.com/show_bug.cgi?id=1191437>
21:59 eightyeight left #gluster
22:01 PinkFreud JoeJulian: heh.  I was just looking at that, actually.  :)
22:01 PinkFreud so it appears what we'd want for this situation is stripe+replicate.
22:03 mmance PinkFreud: let me know what you use, I am also using stripe replicate
22:04 PinkFreud mmance: yeah, we're currently not using that, but it appears to be what we want.
22:04 PinkFreud we're dealing with large VM backups here.
22:05 JoeJulian backups do tend to be the typical use case for stripe.
22:06 PinkFreud yeah.  we didn't realize that.
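
(A hedged sketch of a striped-replicated volume of the kind being discussed, here stripe 2 x replica 2 = 4 bricks; the volume name, hostnames and brick paths are made up, and the stripe caveats from the blog post glusterbot linked still apply.)

    gluster volume create backupvol stripe 2 replica 2 \
        server1:/export/brick1 server2:/export/brick1 \
        server3:/export/brick1 server4:/export/brick1
    gluster volume start backupvol
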
22:06 aheil joined #gluster
22:06 PinkFreud JoeJulian: assuming my data isn't completely toast, and we still have more free space on the large brick pair than total space on the small brick pair...
22:07 PinkFreud would it be a workable idea to A. shut down the (slightly) smaller 1TB brick, leaving the larger 1TB brick in operation, then migrating the data to the 4TB pair and removing the other 1TB brick from the cluster?
22:08 PinkFreud er, B. then migrating ...
22:08 mmance joejulian: I am now trying to figure out the best way to stripe. I plan to add drives all the time if I can.  If I have just 2xDrives in each machine, how would you set up stripe replicate
22:08 mmance er I should say add servers
22:10 mmance I had 3 machines 2 bricks each as 1 x 3 x 2 = 6, but it asked for 6 more when I went to add the next server of 2
22:15 mmance joejulian: I am on your webpage about setfattr.  This has to be run on each server for each brick right?
22:15 JoeJulian right
22:16 mmance do I have to unmount all the clients?
22:16 mmance it seems it keeps going back to already part of a volume
22:16 JoeJulian mmance: Yes, 2 replicas x 3 stripes = 6 bricks to add one more distribute subvolume.
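
(A hedged sketch of what "6 bricks at a time" looks like when expanding a stripe 3 x replica 2 volume by one distribute subvolume; server names and paths are placeholders.)

    gluster volume add-brick myvol \
        server4:/export/b1 server4:/export/b2 \
        server5:/export/b1 server5:/export/b2 \
        server6:/export/b1 server6:/export/b2
    gluster volume rebalance myvol start
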
22:17 JoeJulian No, don't have to unmount all the clients, but check all the bricks you're trying to use. Even a failed create leaves the xattrs behind.
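
(The leftover xattrs JoeJulian mentions are the usual cause of the "already part of a volume" error; a hedged cleanup sketch with a placeholder brick path. This discards gluster's metadata on that brick, so only run it on bricks you intend to reuse.)

    setfattr -x trusted.glusterfs.volume-id /export/brick1
    setfattr -x trusted.gfid /export/brick1
    rm -rf /export/brick1/.glusterfs
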
22:18 mmance what would be a setup to add 1 machines and 2 bricks at a time
22:19 mmance I can only add distribute volumes eh? not add to the striped set
22:19 mmance heh
22:20 JoeJulian Right.
22:20 mmance ok, so for me, I have to recreate the volume every time I add to it
22:20 JoeJulian And there's really no good way to change that.
22:20 mmance no, that makes sense
22:20 mmance doesnt mean I have to like it :-)
22:20 PinkFreud bbiaf.
22:21 JoeJulian And the data, because if you have them striped across 3 disks, then you recreate the volume with 5 disks, slices will be missing from 2 of them.
22:22 theron joined #gluster
22:33 aheil left #gluster
22:40 mmance joejulian: is there a nicer way to stop the volumes so you don't have to do that each time you rebuild?
22:44 JoeJulian No. I think they made it difficult on purpose.
22:45 JoeJulian The devs believe that they should never do a blanket delete of everything. They leave that to the admin.
22:45 mmance heh
22:45 mmance I am writing a script now
22:45 mmance pttt
22:45 mmance lame
22:45 JoeJulian I agree with them.
22:46 mmance I never used glusterfs until today
22:46 mmance so
22:46 mmance no um room for comparison
22:46 JoeJulian Can you imagine the visceral hate that would be heaped upon them if they deleted all your data "accidentally"?
22:47 mmance you say that, but thats like getting mad at the ext3 devs for data loss
22:47 mmance until I really get a feel for it, this video capture seems to be a good start
22:48 mmance btw, I doubled my performance by dropping replicate and going full stripe
22:48 mmance 50mb/s to 100mb/s
22:48 JoeJulian amazing how that works. ;)
22:49 mmance I had the replicate go between servers before, I want to try it mirroring locally and see if that keeps the performance
22:49 mmance I would hate to be in the middle of a shoot and lose something cause one of these cheap 80g drives dies
22:55 mmance back down to 50mb/s
22:55 mmance maybe mdraid?
23:04 JoeJulian Sounds like the best option for what you're describing.
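
(A hedged sketch of the local mdraid mirror idea, a RAID1 pair under each brick; device names, filesystem and mount point are placeholders.)

    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
    mkfs.xfs /dev/md0
    mkdir -p /export/brick1
    mount /dev/md0 /export/brick1
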
23:04 chirino joined #gluster
23:07 aheil joined #gluster
23:13 T3 joined #gluster
23:21 mmance yeah, and its up to 110mb/s
23:21 mmance thats nice
23:23 mmance be back
23:35 atrius` joined #gluster
