IRC log for #gluster, 2014-09-09


All times shown according to UTC.

Time Nick Message
00:11 gildub joined #gluster
00:52 jmarley joined #gluster
01:01 daxatlas joined #gluster
01:05 bala joined #gluster
01:22 lyang0 joined #gluster
01:23 xrandr joined #gluster
01:30 daxatlas joined #gluster
01:47 vimal joined #gluster
01:54 Pupeno joined #gluster
01:56 daxatlas joined #gluster
02:10 haomaiwa_ joined #gluster
02:12 harish joined #gluster
02:17 haomaiw__ joined #gluster
02:37 toordog-work joined #gluster
02:53 mrEriksson joined #gluster
02:57 hagarth joined #gluster
03:08 dastar joined #gluster
03:09 glusterbot joined #gluster
03:15 mrEriksson joined #gluster
03:16 tg2 what can cause a gluster client to hang indefinitely when reading a file from gluster
03:16 tg2 no matter the application...
03:16 tg2 kill -SIGKILL on the pid doesn't even kill it - it just hangs open
03:18 glusterbot joined #gluster
03:20 tg2 it looks a LOT like this
03:20 tg2 http://thr3ads.net/gluster-users/2011/05/479796-hangs-on-accessing-files-on-gluster-mount-3.2.0
03:21 glusterbot Title: thr3ads.net - Gluster users - hangs on accessing files on gluster mount (3.2.0) [May 2011] (at thr3ads.net)
03:25 tg2 just hangs on open("
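A few generic first checks for a hang like the one tg2 describes (a sketch; PIDs, the mount point and the log name are placeholders, and the statedump location varies between builds):

    # a process stuck in uninterruptible sleep (state D) ignores even SIGKILL
    ps -o pid,stat,wchan,cmd -p <hung-pid>
    cat /proc/<hung-pid>/stack                  # kernel-side view of where it is blocked
    # the FUSE client log is named after the mount point, with '/' replaced by '-'
    tail -f /var/log/glusterfs/mnt-glustervol.log
    # ask the glusterfs client process for a statedump (usually written under /var/run/gluster or /tmp)
    kill -USR1 <glusterfs-client-pid>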
03:28 bharata-rao joined #gluster
03:34 edong23_ joined #gluster
03:34 natgeorg joined #gluster
03:35 foster_ joined #gluster
03:35 saltsa_ joined #gluster
03:36 necrogami_ joined #gluster
03:38 nbalachandran joined #gluster
03:39 shubhendu_ joined #gluster
03:39 dastar_ joined #gluster
03:42 RobertLaptop joined #gluster
03:42 verdurin joined #gluster
03:42 guntha joined #gluster
03:42 sauce joined #gluster
03:42 VerboEse joined #gluster
03:42 swc|666 joined #gluster
03:45 eryc joined #gluster
03:45 richvdh joined #gluster
03:48 dusmant joined #gluster
03:56 Pupeno joined #gluster
03:58 itisravi joined #gluster
04:04 haomaiwa_ joined #gluster
04:22 haomai___ joined #gluster
04:29 raghu` joined #gluster
04:35 rafi1 joined #gluster
04:35 Rafi_kc joined #gluster
04:35 anoopcs joined #gluster
04:36 jiffin joined #gluster
04:40 cmtime joined #gluster
04:43 atinmu joined #gluster
04:46 ramteid joined #gluster
04:52 lalatenduM joined #gluster
04:53 msp3k joined #gluster
04:55 saurabh joined #gluster
04:57 ndarshan joined #gluster
05:02 hagarth joined #gluster
05:05 ppai joined #gluster
05:23 kdhananjay joined #gluster
05:27 Guest7984 joined #gluster
05:29 RameshN joined #gluster
05:31 RaSTar joined #gluster
05:31 m0zes joined #gluster
05:31 ProT-0-TypE joined #gluster
05:33 Philambdo joined #gluster
05:38 R0ok_ joined #gluster
05:57 ilbot3 joined #gluster
05:57 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
05:59 R0ok_ joined #gluster
06:01 R0ok_ joined #gluster
06:06 ricky-ti1 joined #gluster
06:07 UnwashedMeme joined #gluster
06:18 _dist joined #gluster
06:21 kumar joined #gluster
06:21 jtux joined #gluster
06:22 lalatenduM joined #gluster
06:23 mariusp joined #gluster
06:23 R0ok_ joined #gluster
06:25 _dist anyone awake? :) trying to figure out if a gluster replication volume being 5x slower than native is reasonable
06:31 nshaikh joined #gluster
06:36 lalatenduM _dist, which operation is slow, read or write?
06:37 _dist write only
06:37 _dist and I expected that, but 5x is a bit worse than I imagined
06:38 ekuric joined #gluster
06:39 _dist to be fair when I say native I mean a VM disk image hosted on native vs hosted through libgfapi
06:39 _dist side by side tests in kali through virtio on both (both with same cache setting)
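A crude way to put a number on the gap _dist describes is to write the same amount of data to both backing stores with direct I/O, so the page cache does not skew the result (inside the guest, or on the host through a FUSE mount; paths and sizes below are placeholders):

    dd if=/dev/zero of=/var/lib/native-disks/test.img bs=1M count=1024 oflag=direct conv=fsync
    dd if=/dev/zero of=/mnt/glustervol/test.img bs=1M count=1024 oflag=direct conv=fsync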
06:40 RaSTar joined #gluster
06:41 bala joined #gluster
06:41 glusterbot New news from newglusterbugs: [Bug 1139506] Core: client crash while doing rename operations on the mount <https://bugzilla.redhat.com/show_bug.cgi?id=1139506>
06:42 hagarth joined #gluster
06:47 LebedevRI joined #gluster
06:49 R0ok__ joined #gluster
06:50 TvL2386 joined #gluster
06:51 karnan joined #gluster
06:52 kanagaraj joined #gluster
06:52 TvL2386 hi guys, I'm creating a fileserver with glusterfs, using a two-server replicated volume. I need to be able to write to the volume from both servers; I don't have the resources, nor should it be necessary performance-wise, to create more servers. How would I mount the gluster volume on the two glusterfs servers themselves?
06:55 gildub joined #gluster
06:55 soumya joined #gluster
06:58 ctria joined #gluster
06:59 R0ok_ joined #gluster
07:02 MickaTri joined #gluster
07:03 maxxx2014 tvl2386: like it says in the manuals, mount -t glusterfs <host>:/<volume-name> /<mountpoint>
07:03 jvandewege semiosis: sorry just saw your, rather cryptic, message (,,(path or prefix)) what did you mean by this if you can remember ;-)
07:03 glusterbot semiosis: http://joejulian.name/blog/glusterfs-path-or-a-prefix-of-it-is-already-part-of-a-volume/
07:04 TvL2386 maxxx2014, I was worried about trying to mount before glusterfs was started... Thanks!
07:15 LebedevRI joined #gluster
07:18 m0zes joined #gluster
07:20 ProT-0-TypE joined #gluster
07:20 maxxx2014 on a glusterfs machine that is both client and server, the glusterfs process seems to be made up of 2 threads. One of them is using 100% of one cpu core. The glusterfs speeds are terrible. Could this thread be limiting the performance?
07:22 Guest7984 joined #gluster
07:22 hagarth joined #gluster
07:25 atinmu joined #gluster
07:25 fsimonce joined #gluster
07:25 Pupeno joined #gluster
07:26 meghanam joined #gluster
07:26 meghanam_ joined #gluster
07:31 soumya joined #gluster
07:35 nishanth joined #gluster
07:36 coredump joined #gluster
07:40 ricky-ticky joined #gluster
07:49 glusterbot New news from resolvedglusterbugs: [Bug 1128820] Unable to ls -l NFS mount from OSX 10.9 client on pool created with stripe <https://bugzilla.redhat.com/show_bug.cgi?id=1128820>
07:52 TvL2386 alrighty... maybe I should be more clear, maxxx2014: I would like to put it in /etc/fstab. I've put it in there, but it seems (using an nfs hard mount) that the machine will not shut down and the volume is not mounted on boot
07:53 TvL2386 I want to automatically mount the volume on server boot. This server is both gfs client and server running ubuntu-14.04
07:57 Guest7984 joined #gluster
08:00 atinmu joined #gluster
08:04 TvL2386 semiosis' ppa states that it fixes this... Need to determine if I want to use ppa's and test....
08:09 TvL2386 not really clear... I think the fix is the init script by Louis Zuckerman which comes on ubuntu-14.04 with the glusterfs-server 3.4.2-1ubuntu1
08:11 ricky-ticky joined #gluster
08:11 glusterbot New news from newglusterbugs: [Bug 1043296] Compression fails on low memory system <https://bugzilla.redhat.com/show_bug.cgi?id=1043296> || [Bug 1115648] Server Crashes on EL5/32-bit <https://bugzilla.redhat.com/show_bug.cgi?id=1115648>
08:12 liquidat joined #gluster
08:15 deepakcs joined #gluster
08:19 glusterbot New news from resolvedglusterbugs: [Bug 1065644] With compression translator for a volume fuse mount I/O is returning input/output error <https://bugzilla.redhat.com/show_bug.cgi?id=1065644>
08:19 MickaTri Hi, what's the difference between a striped volume and a distributed striped volume ?
08:20 ndevos ~stripe | MickaTri
08:20 glusterbot MickaTri: Please see http://joejulian.name/blog/should-i-use-stripe-on-glusterfs/ about stripe volumes.
08:21 hagarth joined #gluster
08:30 MickaTri thx !
08:37 maxxx2014 tvl2386: add _netdev to fstab
08:40 TvL2386 maxxx2014, not working. This is my entry: thishost:/log   /mnt/log  nfs  defaults,_netdev,mountproto=tcp,hard,vers=3  0 2
08:40 TvL2386 maxxx2014, I've now mounted using fstype glusterfs and it works
08:40 TvL2386 maxxx2014, thishost:/log   /mnt/log  glusterfs  defaults,_netdev  0 2
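For reference, a minimal /etc/fstab sketch for the glusterfs mount that ended up working (the trailing dump/pass fields and the optional backupvolfile-server are assumptions, not taken from TvL2386's setup):

    thishost:/log  /mnt/log  glusterfs  defaults,_netdev,backupvolfile-server=otherhost  0 0

As noted a little further up, on Ubuntu 14.04 the packaged init script is what sorts out the boot-time ordering; _netdev alone may not be enough on older packages.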
08:44 giannello joined #gluster
08:48 atinmu joined #gluster
08:51 harish joined #gluster
08:53 mariusp joined #gluster
09:04 a1 joined #gluster
09:06 mariusp joined #gluster
09:08 mariusp joined #gluster
09:19 glusterbot New news from resolvedglusterbugs: [Bug 1120151] Glustershd memory usage too high <https://bugzilla.redhat.com/show_bug.cgi?id=1120151>
09:25 jmarley joined #gluster
09:28 haomaiwa_ joined #gluster
09:31 mariusp joined #gluster
09:32 mariusp joined #gluster
09:34 harish joined #gluster
09:43 jvandewege joined #gluster
09:43 haomai___ joined #gluster
09:52 atinmu joined #gluster
09:59 mariusp joined #gluster
10:11 glusterbot New news from newglusterbugs: [Bug 1139598] The memories are exhausted quickly when handle the message which has multi fragments in a single record <https://bugzilla.redhat.com/show_bug.cgi?id=1139598>
10:13 sputnik13 joined #gluster
10:31 mariusp joined #gluster
10:32 Slashman joined #gluster
10:42 xleo joined #gluster
10:58 lalatenduM joined #gluster
11:04 kkeithley1 joined #gluster
11:06 kkeithley1 left #gluster
11:07 kkeithley1 joined #gluster
11:10 bene2 joined #gluster
11:13 mrEriksson Hello! I did a volume replace-brick to move a brick to another server. Everything worked and it copied all data to the new brick location, but after that the entire gluster environment became unavailable. The clients say: Transport endpoint is not connected
11:13 mrEriksson Any ideas what this could be due to?
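"Transport endpoint is not connected" means the FUSE client has lost its connection to the volume; some generic first checks, with the volume and mount names as placeholders:

    gluster volume status myvol      # are all brick processes online after the replace-brick?
    gluster peer status              # are all peers still connected?
    # on an affected client: check the mount log, then remount
    tail -n 100 /var/log/glusterfs/mnt-myvol.log
    umount /mnt/myvol && mount -t glusterfs server1:/myvol /mnt/myvol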
11:14 chirino joined #gluster
11:22 diegows joined #gluster
11:24 vimal joined #gluster
11:28 mariusp joined #gluster
11:34 elico joined #gluster
11:37 mariusp joined #gluster
11:38 pkoro joined #gluster
11:47 edwardm61 joined #gluster
11:52 altmariusp joined #gluster
11:53 Philambdo1 joined #gluster
11:54 Bardack joined #gluster
11:54 giannello joined #gluster
11:55 mariusp_ joined #gluster
11:59 soumya joined #gluster
12:00 ira joined #gluster
12:01 VerboEse joined #gluster
12:02 meghanam joined #gluster
12:02 meghanam_ joined #gluster
12:02 ndevos reminder: gluster bug triage meeting starting *now* in #gluster-meeting
12:03 todakure joined #gluster
12:04 jdarcy joined #gluster
12:04 swc|666 joined #gluster
12:04 sauce joined #gluster
12:05 VerboEse joined #gluster
12:05 lalatenduM joined #gluster
12:08 soumya joined #gluster
12:08 soumya joined #gluster
12:11 kdhananjay joined #gluster
12:17 edward1 joined #gluster
12:23 itisravi_ joined #gluster
12:30 JonathanD joined #gluster
12:30 _Bryan_ joined #gluster
12:40 LHinson joined #gluster
12:40 mojibake joined #gluster
12:41 RaSTar joined #gluster
12:42 glusterbot New news from newglusterbugs: [Bug 1139682] statedump support in glusterd <https://bugzilla.redhat.com/show_bug.cgi?id=1139682> || [Bug 1005616] glusterfs client crash (signal received: 6) <https://bugzilla.redhat.com/show_bug.cgi?id=1005616>
12:44 rwheeler joined #gluster
12:45 rgustafs joined #gluster
12:46 sputnik13 joined #gluster
12:48 rwheeler joined #gluster
12:55 theron joined #gluster
12:57 theron_ joined #gluster
12:59 julim joined #gluster
13:03 chirino joined #gluster
13:06 bene2 joined #gluster
13:10 deeville joined #gluster
13:15 hagarth joined #gluster
13:18 lmickh joined #gluster
13:18 vimal joined #gluster
13:19 MickaTri Hi, have you heard about Nexenta ? Is it like Glusterfs ?
13:22 julim joined #gluster
13:27 recidive joined #gluster
13:30 bennyturns joined #gluster
13:34 tdasilva joined #gluster
13:36 foobar MickaTri: not even close...
13:41 vimal joined #gluster
13:42 guntha joined #gluster
13:47 deepakcs joined #gluster
13:47 jmarley joined #gluster
13:55 LHinson joined #gluster
13:55 daxatlas joined #gluster
13:56 failshell joined #gluster
14:02 LHinson1 joined #gluster
14:18 B21956 joined #gluster
14:19 B21956 joined #gluster
14:23 wushudoin| joined #gluster
14:23 shuky joined #gluster
14:24 shuky Hi everyone, I have a question regarding 4 gluser servers
14:24 shuky I'm installing 4 servers, but only one of the servers will have new files uploaded to it, which I want to be synced to the other 3 servers
14:25 shuky I want to know if I can close the ports on the server that has the files, because I don't want it to be synced from the other 3
14:31 sputnik13 joined #gluster
14:36 SpComb shuky: every glusterfs client needs to be able to connect to all glusterfsd server
14:36 SpComb *servers
14:37 shuky I want all 4 servers to be on the same pool, but only one server upload files that I want to distributed to other 3
14:37 shuky and I want to stop these 3 servers from accessing the 4th server
14:37 shuky that one that is uploading the files
14:38 shuky for the configuration part I can have all ports open
14:38 shuky but if I then want to close the port for the 4th server, so basically the other 3 can't update it
14:38 shuky will it work?
14:40 lalatenduM joined #gluster
14:40 bennyturns joined #gluster
14:44 SpComb shuky: the glusterfs client writes directly to all servers in the pool
14:47 shuky SpComb: and if I have only one client that does the writing, can I make sure that other clients won't write to it
14:49 SpComb shuky: ah, good point indeed, I asked about that earlier myself as well. You can use `volume set ... auth.allow ...` to restrict which clients can read/write, but there is no separate read-only set of clients
14:50 nishanth joined #gluster
14:50 SpComb best you can do is do the glusterfs mount as `... -o ro`
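A sketch of the two knobs SpComb mentions, with volume name and addresses as placeholders; note that auth.allow controls which clients may connect at all, while -o ro only makes that particular mount read-only:

    gluster volume set myvol auth.allow 10.0.0.*        # comma-separated addresses, wildcards allowed
    mount -t glusterfs server1:/myvol /mnt/myvol -o ro  # read-only mount on the non-writing nodes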
14:50 kkeithley_ it can probably be made to work, but you're not using Gluster as intended. When you create a replica volume, clients write to all the servers in the replica set. There's no communication between the servers; the client does all the writes. Four copies is going to slow down the ingest a lot.
14:50 SpComb imo it would be a useful feature to define a set of "readonly" clients
14:51 kkeithley_ If you want all the ingest to occur through a single server, start by creating a volume with a single brick. Do your ingest. Then add the other three volumes
14:51 jobewan joined #gluster
14:51 kkeithley_ then add the other three bricks.
14:51 SpComb I read this as having four nodes, each acting as both a brick and a client. One of the nodes should act as a writing client, and the others as read-only clients
14:52 toordog-work shuky if you want to support concurrent access to a file system, you need to have a cluster lock system. like Cluster LVM
14:52 shuky and if I set it up that one client does all the writing I can close the port on that server
14:52 shuky ?
14:52 toordog-work but that require a real cluster
14:53 shuky my problem is with IT that won’t allow access to the writing client
14:54 shuky and I want to make sure the client can write to all servers (one of them is local and the rest are accessible) but no one can write to this client
14:54 vimal joined #gluster
14:54 kkeithley_ another alternative would be to create a one-brick "primary" cluster for ingest. Use geo-rep to replicate to a three-brick read-only "slave"
14:56 toordog-work or use GFS instead
14:58 shuky and after I set up the geo-rep, the slaves won't need to communicate with the master; only the master will push to the slaves?
15:00 MickaTri Hi, i want to set up a ProxMox server, do i need to install glusterfs on it ?
15:03 elico joined #gluster
15:03 Gabou I've asked a few days ago without any answer: how does gluster sync? Is it over tls or something? I mean, does the protocol run under a cipher?
15:04 skippy Gabou: Gluster does not encrypt by default, but you can enable that.
15:05 skippy Gabou: http://blog.gluster.org/author/zbyszek/
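Roughly what the linked post describes for the 3.5 series: TLS is switched on per volume, and every server and client needs its certificate, key and CA bundle in the locations the feature expects (details may differ between versions):

    # expected on each node: /etc/ssl/glusterfs.pem, /etc/ssl/glusterfs.key, /etc/ssl/glusterfs.ca
    gluster volume set myvol client.ssl on
    gluster volume set myvol server.ssl on
    gluster volume set myvol auth.ssl-allow 'client1-CN,client2-CN'   # optional, restrict by certificate CN
    # typically needs a volume stop/start for the brick processes to pick it up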
15:08 rotbeard joined #gluster
15:11 getup- joined #gluster
15:11 Gabou thx skippy
15:12 shuky joined #gluster
15:12 daMaestro joined #gluster
15:14 mariusp joined #gluster
15:16 plarsen joined #gluster
15:16 MickaTri Can we use the redundancy features of Glusterfs with ProxMox ?
15:17 MickaTri Or does Proxmox allow just one IP for glusterfs, and then there is no H-A ?
15:18 altmariusp joined #gluster
15:25 mariusp joined #gluster
15:29 coredump joined #gluster
15:30 failshel_ joined #gluster
15:30 MickaTri Any answer ?
15:32 msmith__ joined #gluster
15:37 fignews left #gluster
15:57 failshell joined #gluster
15:58 getup- joined #gluster
16:02 vimal joined #gluster
16:05 sputnik13 joined #gluster
16:08 Gabou [862356.324926] XFS (sda4): unknown mount option [errors].
16:08 Gabou So we can't use errors=remount-ro with XFS?
16:10 chirino joined #gluster
16:15 mariusp joined #gluster
16:20 PeterA joined #gluster
16:22 julim joined #gluster
16:30 soumya joined #gluster
16:32 dtrainor joined #gluster
16:38 LebedevRI joined #gluster
16:58 hagarth joined #gluster
17:00 elico joined #gluster
17:03 elico joined #gluster
17:05 elico joined #gluster
17:06 nbalachandran joined #gluster
17:08 elico joined #gluster
17:09 chirino joined #gluster
17:13 elico joined #gluster
17:14 vimal ndevos, are you online?
17:15 elico joined #gluster
17:18 _dist joined #gluster
17:20 _dist JoeJulian: I'm now running VMs on a new 3.5.2 volume, I'm finding that volume heal info is "fixed" but takes 7-9 seconds to run, but unless I'm misunderstanding it statistics heal-count does what heal info used to
17:21 shuky joined #gluster
17:22 ndevos hi vimal!
17:23 bennyturns joined #gluster
17:23 sijis_ joined #gluster
17:34 _dist also, when I force heal circumstances, it says "possibly undergoing heal"
17:38 sijis_ going from 3.4.2 to 3.4.5 is a simple yum update?
17:38 sijis_ should i do the gluster nodes first, then the clients?
17:39 JoeJulian All of them are nodes.
17:39 JoeJulian Even a printer is a "node".
17:40 semiosis a bridge is not a node.  learned that in CCNA prep
17:40 semiosis (never took the exam tho)
17:40 sijis so how should i rephrase it... do the gluster servers first, then the clients?
17:40 B21956 joined #gluster
17:41 JoeJulian semiosis: Correct. A bridge is not an endpoint. Only endpoints are nodes.
17:41 JoeJulian sijis: clients before servers.
17:41 semiosis what?
17:41 JoeJulian what?
17:41 semiosis servers before clients
17:42 JoeJulian But... hmm...
17:42 semiosis get coffee
17:42 JoeJulian Oh, right.
17:42 JoeJulian I guess...
17:42 JoeJulian I already had one double today... maybe I do need another.
17:43 JoeJulian I'm a little distracted lauging at the "new" features announced with the iphone 6.
17:43 JoeJulian laughing even.
17:43 bala joined #gluster
17:43 semiosis the snark on twitter is hilarious
17:44 sijis assuming, i just have 2 gluster servers. i would update gluster1 - reboot, then when its back up do gluster2- reboot
17:44 sijis then do the clients
17:44 dtrainor joined #gluster
17:44 _dist JoeJulian: has anyone written anything about the new "statistics" and slower heal info, I'm testing replication healing on 3.5.2
17:44 semiosis sijis: that should be OK, but no guarantees
17:44 JoeJulian Make sure everything's healed, do one. Make sure everything's healed, do the other.
17:45 JoeJulian _dist: I haven't seen anything yet.
17:45 JoeJulian _dist: fyi, the beta for 3.5.3 is expected this week.
17:45 _dist ok, well, it looks like "statistics heal-count" has the same issue as heal info used to, and heal info works, but is chuggy (if you want to do a watch) and there's still no way to get an ETA
17:46 _dist but at least false positives seem to be gone
17:46 sijis healed by looking at 'gluster volume heal <volname> info'?
17:47 JoeJulian ETAs are difficult at best. Maybe even impossible.
17:47 _dist "/images/102/vm-102-disk-1.qcow2 - Possibly undergoing heal" seems like gluster isn't sure? :)
17:47 _dist ok forget ETA, how about % of source file crawled
17:47 JoeJulian Yeah, that seems like a stupid statement.
17:48 JoeJulian OMG!!! Apple Pay IS Google Wallet!
17:48 _dist really?
17:49 sijis JoeJulian: verify healed by looking at 'gluster volume heal <volname> info'? (not sure if you saw)
17:49 JoeJulian I didn't scroll back today.
17:49 _dist sijis: that's how I do it
17:51 _dist JoeJulian: as another comment, I've always disliked how it's really hard to tell what node is healthy and what node isn't, the file just shows up on all three. But hey, so far so good
17:51 kkeithley1 joined #gluster
17:52 sijis _dist: so if you see 0 in the entries, it's good.
17:53 _dist sijis: you should probably check three things to be "safest"
17:53 _dist gluster volume heal volname info
17:53 _dist gluster volume heal volname info split-brain
17:53 _dist gluster volume heal volname info heal-failed
17:54 LHinson joined #gluster
17:54 sijis _dist: thanks for the tip
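Putting the advice above together, a rolling-upgrade sketch for a two-server replica on an RPM-based system (volume name is a placeholder; check the release notes for the versions involved before starting):

    # on gluster1, only once heal info reports nothing pending
    gluster volume heal myvol info
    yum update 'glusterfs*'            # then restart glusterd, or reboot
    gluster volume heal myvol info     # wait until clean again before touching gluster2
    # repeat on gluster2, then update and remount the clients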
17:59 _dist JoeJulian: there's no data transfer with this "possibly healing" message
18:00 kanagaraj joined #gluster
18:00 JoeJulian ugh
18:06 _dist the trusted afr xattr keeps cycling between 0x0's and 0x....1 every few seconds on all three servers for all three bricks
18:07 fignew joined #gluster
18:08 _dist hmm, it does look like it finished. I guess I'd have to conclude the almost nil network transfer is due to gluster taking a long time to find the out-of-sync data that needs to be sent
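The changelog xattrs _dist is watching can also be read directly on a brick, which is often the quickest way to see whether AFR still considers a file dirty (the brick path below is an assumption, built from the file mentioned earlier):

    getfattr -d -m . -e hex /export/brick1/images/102/vm-102-disk-1.qcow2
    # trusted.afr.<volname>-client-N all zeroes => nothing pending from this brick's point of view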
18:14 qdk joined #gluster
18:18 bennyturns joined #gluster
18:22 R0ok__ joined #gluster
18:22 JoeJulian There is a lot of read and compare checksums without data transfer, similar to rsync.
18:24 mojibake joined #gluster
18:24 _dist JoeJulian: Sounds good, also any idea what this error message means? https://dpaste.de/sOKe
18:24 glusterbot Title: dpaste.de: Snippet #282351 (at dpaste.de)
18:25 _dist it appears to have no negative impact, but still
18:25 _dist happens only at storage migration initiation
18:27 partner hmm, i would need to copy/move a bunch of data from one datacenter to another. i was wondering, speed-wise, whether i should replace-brick, add replica 2 (currently distributed), or somehow perhaps rsync in advance and let the self-heal take care of it later on.. ?? i'm talking about a quarter petabyte of data here
18:29 partner need to testdrive and see which makes sense
18:30 R0ok__ joined #gluster
18:30 _Bryan_ joined #gluster
18:32 partner was just wondering if anybody had prior experience with such moves
18:34 R0ok___ joined #gluster
18:36 R0ok___ joined #gluster
18:37 longshot902 joined #gluster
18:37 R0ok___ joined #gluster
18:39 R0ok___ \q
18:40 R0ok__ joined #gluster
18:42 R0ok___ joined #gluster
18:43 R0ok___ joined #gluster
19:18 theron joined #gluster
19:25 LHinson1 joined #gluster
19:37 clyons joined #gluster
19:42 elico partner: what moves?
19:45 dtrainor joined #gluster
19:53 the-me *with my debian developer hat on* is anyone here using glusterfs on Debian?
19:54 semiosis hi patrick!
19:54 semiosis we have some debian users here, i think partner is one, and _dist
19:56 semiosis the-me: whats up?
19:56 the-me semiosis: just want some feedback :)
20:01 _dist the-me: I am using glusterfs 3.5.2 on debian for file storage and vm storage in prod
20:03 the-me _dist: from the official debian packages (currently in testing/sid)? any faults not related to upstream?
20:03 _dist gluster.org ones
20:03 _dist I think debian is still 3.3x ?
20:03 _dist maybe not though, I'm running wheezy
20:04 the-me yeah because this is a release
20:05 _dist semiosis: I'm finally maxing 10gbe out on storage migration of vms, with less than 6% io-wait
20:05 the-me _dist: here is an overview: https://packages.qa.debian.org/g/glusterfs.html
20:05 glusterbot Title: Debian Package Tracking System - glusterfs (at packages.qa.debian.org)
20:06 _dist the-me: what I'm running http://download.gluster.org/pub/gluster/glusterfs/3.5/3.5.2/Debian/wheezy/README
20:06 elitecoder joined #gluster
20:06 _dist 3.2.7 would not make me happy :)
20:07 elitecoder Hi guys. I have a two node and two client setup. They're in different locations in the data center. The bandwidth is costing money, but it's free if the clients prefer to talk to a gluster box in their own physical location / subnet
20:07 the-me nobody requested a backport, yet :=)
20:08 elitecoder Is there a way to get gluster clients to "prefer" a certain box, if it's available?
20:08 the-me semiosis: *we* could work on it. you are a bit inactive on maintaining the deb packages ;)
20:09 semiosis yes i have been remiss in sending you patches
20:09 _dist elitecoder: nothing comes to mind that isn't hacky, like blocking client egress except when you need it (I'm assuming you're using replication)
20:09 skippy semiosis recommends building volumes with multiple bricks, because moving bricks is easier/faster than adding new bricks. Does it make sense to make one brick a really small LVM just to get it into the volume, and then lvextend it later as needed?
20:10 semiosis the-me: are you still using SVN for the packages?  i need to get the server info again, and have my password reset
20:10 skippy that is, one large brick, and one small (couple hundred meg) brick?
20:10 semiosis skippy: best practice is for all bricks to be same size
20:10 elitecoder Latency would be slightly lower on the closer boxes.
20:10 elitecoder Not sure if there's anything for choosing based on that
20:11 semiosis elitecoder: gluster tries to be smart about that, but idk how you can influence the decision.  check ,,(options) maybe there is something
20:11 glusterbot elitecoder: See config options and their defaults with 'gluster volume set help'; you can see the current value of an option, if it has been modified, with 'gluster volume info'; see also this page about undocumented options: http://goo.gl/mIAe4E
20:11 the-me semiosis: yes. just mail me: pmatthaei@debian.org then I could set you a new password
20:11 _dist elitecoder: we don't know anything about your volume setup
20:11 semiosis the-me: ok thanks
20:11 elitecoder _dist: Is there something specific that would be helpful?
20:14 elitecoder I can grab the info
20:14 skippy thanks, semiosis
20:16 semiosis yw
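If the small-LVM-brick idea is used anyway, growing the brick later is straightforward; a sketch assuming an XFS-formatted brick, with device and mount point as placeholders:

    lvextend -L +500G /dev/vg_bricks/brick1
    xfs_growfs /export/brick1          # resize2fs for ext4 bricks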
20:16 DV_ joined #gluster
20:19 toordog-work the files located in .glusterfs which have hash names, are they a copy of the real file in the volume?  *Is it taking twice the space ?
20:23 semiosis toordog-work: it is a hard link, so it does not use any more space.  you can use the ,,(gfid resolver) to locate the file
20:23 glusterbot toordog-work: https://gist.github.com/4392640
20:24 elitecoder semiosis: thanks for the tip regarding the gluster help. I looked through all the options. I was glad to find encryption. But didn't find anything for specifying how the load is distributed
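For later readers of this exchange: the AFR read-side options are the closest thing to a preference knob, though names and behaviour vary between releases, so treat these as hints and confirm them against `gluster volume set help` on your version:

    gluster volume set myvol cluster.choose-local on                  # prefer a local brick for reads
    gluster volume set myvol cluster.read-subvolume myvol-client-0    # pin reads to one replica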
20:24 toordog-work thanks
20:25 toordog-work makes sense. A good usage of hardlinks.
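A sketch of checking this by hand on a brick; the gfid and brick path are made-up placeholders, and the gfid resolver glusterbot links automates the same idea:

    stat /export/brick1/.glusterfs/ab/cd/abcd1234-0000-0000-0000-000000000000       # link count should be 2+
    find /export/brick1 -samefile /export/brick1/.glusterfs/ab/cd/abcd1234-0000-0000-0000-000000000000 \
        -not -path '*/.glusterfs/*'

Directories are the exception: their .glusterfs entries are symlinks rather than hard links.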
20:25 partner elico: not sure if you joined after my lines (i've ignored all the gazzillion joins and parts), was just wondering out loud how should i move a rather large set of data into another datacenter most efficiently
20:26 elico partner: ok..
20:26 elico it depends if the data is being used live or not..
20:27 elico I was in and out for breaks etc.
20:27 elitecoder partner: rsync w/ compression or just make a huge ass archive and compress the shit out of it, then scp it : D
20:27 partner not all of it all the time no, there's 250 TB of it to be moved
20:27 elitecoder LOL
20:27 elitecoder I would arrange for physical storage to be attached, and transferred.
20:28 partner that would provide the best bandwidth yes
20:28 elitecoder then from there you could sync it over the net
20:28 elico well 250TB is quite a lot..
20:29 semiosis i moved less than 10 TB from rackspace to AWS over the net and it took something like three days
20:29 elitecoder lol suck it rack space
20:29 semiosis used gluster-nfs server in aws, linux nfs client in rackspace, connected them over an openvpn tunnel
20:29 partner its only half rack to be moved BUT i've been hinted it should stay online all the time, hence wondering the options
20:29 elico semiosis: why would you do that exactly??
20:29 elitecoder I moved 5 servers from there to DO
20:29 semiosis elitecoder: migrating from rackspace to aws
20:29 semiosis s/elitecoder/elico/
20:29 glusterbot What semiosis meant to say was: elico: migrating from rackspace to aws
20:30 elico nice semiosis
20:30 elitecoder lol nice
20:30 partner i am merely just performing a physical location change here, nothing external is nor will be involved
20:31 elico Well you can buy some 10Gb lines for the time of the migration but it will cost...
20:32 theron joined #gluster
20:32 elico s/buy/buy or hire
20:33 theron joined #gluster
20:33 elitecoder partner: Are you saying you're not willing to get physical drives to store it and ship it?
20:33 theron joined #gluster
20:34 partner elitecoder: not sure what you mean.. i have 12 servers providing this storage/volume that needs to move and yet be available pretty much all the time during the move
20:34 elitecoder partner: Copy all to external storage, ship it, then rsync to update it afterwards.
20:35 elitecoder Unless of course, transferring it over the internet is fast enough/cheap enough. And it may be.
20:35 partner well that's what i'm trying to figure out here
20:35 elitecoder Sorry
20:36 partner 250 TB would be shitloads of external usb-drives :)
20:36 elitecoder yeah no shit
20:36 elitecoder Where are you moving to? just curious
20:36 partner roughly 600 million files, i'm sure rsync will love syncing it
20:37 elitecoder Yuuuuup
20:37 partner to another datacenter, somehow utilizing old/new/temporary hardware
20:38 elitecoder Yeah but what company?
20:38 elitecoder softlayer, aws, ...
20:38 partner my options/questions were mostly related to should i set up similar storage to target and either do replace-brick operations, set up replication between or maybe something else
20:39 partner its all internal, our hardware and stuff, and my worry
20:40 semiosis partner: def not replication.  maybe geo-replication
20:40 elico partner: depends on your funds
20:41 theron joined #gluster
20:41 elico if you have funds only for partial thing then you will need to invest in lines more
20:41 semiosis partner: not replace-brick either.  you dont want to be replicating over a WAN link, latency is too high
20:41 elico 250 TB should be about how many servers?
20:41 partner semiosis: now we're talking :) and i was afraid to hear this, too
20:41 partner elico: its 12 servers thought there's still plenty of free space
20:42 partner and other volumes aswell, just concentrating on the biggest one here
20:42 elico how many u each of them?
20:42 elico s/u/u's/
20:42 glusterbot What elico meant to say was: how many u's each of them?
20:43 elico if the servers are not that expensive you should consider geo-replication for the task using an exact copy of the hardware.
20:43 partner semiosis: i'm yet to test how long it would take with 1 TB of data between a few of our datacenters. and no, i don't have numbers for the new datacenter yet either, as it's being built up as we speak
20:44 elico hmm
20:44 LHinson joined #gluster
20:44 partner yeah, maybe geo-replication, that is something i have not yet even thought
20:45 semiosis geo-replication is rsync managed by gluster
20:45 partner yeah that i know, haven't used it ever, probably why it did not occur to me earlier
20:45 elico I did not tried geo-replication yet.
20:46 elico I do know it helps with lots of things.
20:46 partner and actually i don't have to be 100% online with the data, i only need to provide the very latest files, older stuff could be unavailable for some time
20:47 partner and to achieve that.. in theory, setting up an internal (to gluster) disk-free-space limit for bricks should prevent any write attempts to full ones. or rather in practice, this is how it runs currently anyway, rebalance is way too slow
20:48 partner at least it works with bricks online, no idea what happens when they go offline :o
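For the record, the geo-replication setup semiosis suggests looks roughly like this on the 3.5 series (volume and host names are placeholders; the slave volume must already exist, and passwordless root SSH from a master node to the slave host is assumed):

    gluster system:: execute gsec_create
    gluster volume geo-replication mastervol slavehost::slavevol create push-pem
    gluster volume geo-replication mastervol slavehost::slavevol start
    gluster volume geo-replication mastervol slavehost::slavevol status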
20:50 partner semiosis: btw never said thanks for the packages last time, i'm way too much on and off here so better late than never, thanks :)
20:50 semiosis yw!
20:51 mariusp joined #gluster
20:54 dtrainor joined #gluster
20:57 partner personally feeling sad not being able to contribute much for the community, hence being very grateful for all you guys for the work you're doing
20:59 partner anyways, not yet moving anything, just preparing for it and looking for options
21:00 partner i would probably personally select the highest bandwidth and schedule a downtime for moving the whole gluster, we're only 3 hours apart in between the datacenters, added with disassembly and reassembly but that of course comes with certain risks too
21:05 julim joined #gluster
21:20 zerick joined #gluster
21:21 failshel_ joined #gluster
21:27 andreask joined #gluster
21:34 maxxx1024 joined #gluster
21:49 MacWinner joined #gluster
21:53 sputnik13 joined #gluster
21:59 dtrainor joined #gluster
22:01 asku joined #gluster
22:08 elitecoder left #gluster
22:22 fignew joined #gluster
22:22 mariusp joined #gluster
22:53 recidive joined #gluster
23:04 fignew joined #gluster
23:04 fignew joined #gluster
23:18 dtrainor joined #gluster
23:21 _Bryan_ joined #gluster
23:23 mariusp joined #gluster
23:30 dtrainor joined #gluster
23:46 calum_ joined #gluster
23:47 bala joined #gluster
23:54 plarsen joined #gluster
