
IRC log for #gluster, 2013-02-02


All times shown according to UTC.

Time Nick Message
00:00 elyograg the directories right under /bricks are the filesystem mount points.
00:01 elyograg it's replica 2, 8 bricks on each server.  each brick is 5 terabytes.
00:06 partner if it's just one huge root mount i would go and reinstall all the servers.. even for testing, as it so often turns into production all of a sudden
00:07 clusterflustered i think i made an error, the idea of just creating a folder /exports/brick on an already mounted system wont work will it?  i need a fresh disk. do i also need it to be in the xfs format? the servers i am using have no extra space on them
00:08 partner it will work
00:08 partner it will just use the disk that is available on that mountpoint
00:09 clusterflustered does it matter if that is a system disk?
00:10 partner not for gluster, but if you fill the volume you will also fill the whole root and your server is most likely going to go nuts sooner or later
00:11 partner in case everything is on the same disk.. not good. it will simply prevent the whole operating system from working when it cannot create files etc.
00:11 partner for testing you can do whatever
00:12 partner i would even encourage you to break it so that you can learn what happens and how to recover from it
00:13 ultrabizweb joined #gluster
00:13 elyograg you may notice that my brick definitions all end in /testvol, but the filesystems are mounted one directory level higher.  this is to keep gluster from filling up the root filesystem in case a brick doesn't get mounted.
00:13 clusterflustered ah ok, so giving a brick its own partition is more cautionary than anything, is that what im getting?
00:14 elyograg gluster is perfectly happy to create directories that don't exist when it is creating a volume, but when firing up an existing one, the directory has to already exist.
00:15 partner clusterflustered: pretty much. just consider the possibility that a user or service fills up your gluster volume and by doing so brings down all 4 of your servers along with it
00:16 partner now if the mount for bricks (and therefore for the volume) would be on its own disk, it would just fill up the volume, not the server root disk
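A minimal sketch of the layout elyograg and partner describe, assuming a hypothetical spare device /dev/sdb1 and hostnames server1/server2. The brick path is a subdirectory of the mount point, so if the disk ever fails to mount the brick directory is simply missing and gluster cannot quietly fill the root filesystem:

    mkfs.xfs -i size=512 /dev/sdb1       # dedicated disk for the brick
    mkdir -p /bricks/d01
    mount /dev/sdb1 /bricks/d01          # plus an /etc/fstab entry so it mounts at boot
    mkdir /bricks/d01/testvol            # brick path = one level below the mount point
    gluster volume create testvol replica 2 \
        server1:/bricks/d01/testvol server2:/bricks/d01/testvol
    gluster volume start testvol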
00:16 harold[MTV] joined #gluster
00:17 maxiepax joined #gluster
00:17 partner in my early testing, actually on the very first one, i did screw up my brick naming and ended up with server1 having 10 GB and server2 having 3 GB (the latter on the root disk) - funny things happened when i started filling up the disk with dd..
00:18 partner i managed to just go logically server1:/brick1 server2:/brick2 while both were supposed to be brick1..
00:18 partner an excellent training scenario.. i had to immediately learn how to do replace-brick operation
00:19 partner the next obvious one is of course expanding it; i should probably document that. IMO good hands-on stuff for any newcomer
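A hedged sketch of the two operations partner mentions, using the hypothetical volume name testvol and the brick paths from the story (3.3-era CLI syntax):

    # move the misnamed brick onto the intended path
    gluster volume replace-brick testvol server2:/brick2 server2:/brick1 start
    gluster volume replace-brick testvol server2:/brick2 server2:/brick1 status
    gluster volume replace-brick testvol server2:/brick2 server2:/brick1 commit
    # expand the volume, then spread existing data onto the new bricks
    gluster volume add-brick testvol server3:/brick1 server4:/brick1
    gluster volume rebalance testvol start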
00:19 elyograg i filed a bug asking that volume creation not be allowed to proceed if one or more of the brick paths doesn't exist.  at the very least it should inform you of the problem and ask if you're sure.
00:19 partner elyograg: amen..
00:20 partner with my short experience i think gluster does very many things without giving a warning, even when the requested operation is obviously very wrong
00:21 elyograg i filed a different bug asking that a volume be kept from starting if any of the bricks are located on the root filesystem, unless you define a volume property saying that's ok.
00:22 partner another +1
00:23 partner it might be very obvious for many but it is dangerously not obvious for many more
00:23 elyograg train time.
00:26 partner i don't know if there is anything i can do to improve the situation. i do have a fresh pair of eyes and not much experience, so i am in an excellent position to point out several things, please don't get me mad but do instruct on what to do to correct these
00:26 partner uh, don't get mad at me..
00:28 partner community pages surely need updating; for that i could create an account. they lack clear required steps (such as starting the volume, unless m0zes fixed those already)
00:29 partner a bunch of faq setups, perhaps illustrated; i guess i could steal my internal visio graphs for the community's benefit
00:41 clusterflustered sorry, we had some doo doo head trying to use nomachine from a mac.
00:41 clusterflustered i dont get mad, i get grateful, i appreciate all your input here.
00:41 clusterflustered i feel like im the problem causer/time waster with all my questions
00:42 puebele1 joined #gluster
00:50 bluefoxxx joined #gluster
00:50 bluefoxxx Will gluster self-heal without quorum?
00:51 bluefoxxx i.e. can I use replication = 2 and expect that if a server fails, fscks itself, comes back with damaged files, it will correctly self-heal from the other server?
00:51 bluefoxxx or do I need 3 servers so that 2 are in agreement about what is right and can overrule the third (and besides, have a quorum of simple majority)
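For reference, 3.3 does expose self-heal and client-side quorum controls per volume; a rough sketch with a hypothetical volume name (whether replica 2 without a third node is sufficient is exactly the question being asked here):

    gluster volume heal myvol                          # kick off self-heal of pending entries
    gluster volume heal myvol info                     # list files still needing heal
    gluster volume set myvol cluster.quorum-type auto  # only allow writes while a majority of the replica set is up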
00:54 partner bluefoxxx: what version you're running?
00:56 bluefoxxx partner, none yet
00:56 bluefoxxx I'll have whatever's available on RHEL6 I guess.
00:57 bluefoxxx can I use 3.3?
00:57 partner works fine on my 3.3.1
00:57 bluefoxxx is there a repository for GlusterFS 3.3?
00:58 partner http://download.gluster.org/pub/gluster/glusterfs/3.3/3.3.1/
00:58 glusterbot <http://goo.gl/ZO2y1> (at download.gluster.org)
00:59 bluefoxxx http://download.gluster.org/pub/gluster/glusterfs/3.3/3.3.1/EPEL.repo/epel-6/ ah :)
00:59 glusterbot <http://goo.gl/2rK7a> (at download.gluster.org)
00:59 bluefoxxx then i'll use this.
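A rough sketch of wiring that repo up on RHEL/CentOS 6; the exact .repo filename is an assumption, so check the directory listing first:

    cd /etc/yum.repos.d
    wget http://download.gluster.org/pub/gluster/glusterfs/3.3/3.3.1/EPEL.repo/glusterfs-epel.repo
    yum install glusterfs glusterfs-server glusterfs-fuse
    service glusterd start && chkconfig glusterd on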
00:59 bluefoxxx there's stuff for EPEL7, is it RHEL7 beta time yet?
00:59 bluefoxxx I am not a redhat fan but I am highly interested in RHEL7
01:00 bluefoxxx I'd rather use Debian, but the only attractive distribution is Ubuntu, and I have issues with some of Ubuntu's behavior (particularly the lack of systemd here)
01:00 partner i'm on debian so no comment on other platforms
01:00 bluefoxxx nod
01:01 bluefoxxx I just go where the technology I need is.  My personal preference leans me to migrate when the technology is where I want to be, or just goes on my desktop.  :)
01:01 bluefoxxx anyway
01:01 bluefoxxx so I only need 2 storage clusters for this to work
01:01 bluefoxxx should I STONITH them if possible?
01:02 bluefoxxx partner:  this is going to be a less than happy setup.
01:02 partner you can even start with one but that wouldn't be too distributed
01:03 bluefoxxx The storage servers will be multi-role as a streaming media server (Wowza), encoder (to transcode high-quality MP4 files into low-medium-high quality), and storage :|
01:04 bluefoxxx I would insist on separating them, but the assets are expensive
01:04 bluefoxxx $40,000/year expensive.
01:05 bluefoxxx I'm interested in geo-clustering or whatever it's called, where we don't write in colocation A, just expose it for read; but where at the on-site HOME location we have a GlusterFS cluster that stays consistent and pushes out to colocation A
01:05 partner i'm not the best expert here so i would suggest you visit next week; it's saturday already in europe and the US dudes are most likely enjoying their well-earned beers as we speak :)
01:05 bluefoxxx nod
01:06 bluefoxxx I have a refrigerator full of beer I brewed weeks ago
01:06 partner nice
01:07 partner geo is basically just rsync controlled by gluster, as it's aware of the changed data
01:07 bluefoxxx nod
01:07 bluefoxxx that sounds like pain with millions of huge files though.
01:07 bluefoxxx having tried to rsync 30 million small files I would know :)
01:08 bluefoxxx (it doesn't work)
01:08 partner yeah, sounds like a pain to me
01:09 partner i'm also making some design decisions here on our to-be-renewed storage service, splitting files into multiple levels of subdirs based on file hash
01:09 partner i was told it's very much an anti-pattern for gluster's algorithms and will affect geo-replication badly
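For context, a geo-replication session in 3.3 is set up and monitored roughly like this (hypothetical master volume and slave target; the slave can also be another gluster volume):

    gluster volume geo-replication mastervol slavehost:/data/remote_dir start
    gluster volume geo-replication mastervol slavehost:/data/remote_dir status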
01:10 partner anyways, you have some serious questions, so don't listen to me but come back next week when the pros are present
01:10 bluefoxxx nod
01:14 partner i'm also going to question the worth of our files as we are pretty much meeting the same costs.. is it really worth trying to mirror the internet...
01:15 clusterflustered so ext4 won't work at all with gluster is what i'm getting
01:15 clusterflustered i can't just create a folder on my current disk and go, can i?
01:16 partner clusterflustered: as far as i know there are some issues around it, details are in the bug reports
01:16 partner sure you can
01:16 partner mkdir /foo/bar/brick1 and go make volume out of it
01:17 partner gluster will just use the disk available for that particular directory (which would be the amount of disk available on your root)
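Spelled out, that quick-and-dirty test path looks roughly like this (hypothetical single server name, brick directory on the existing root filesystem):

    mkdir -p /foo/bar/brick1
    gluster volume create testvol server1:/foo/bar/brick1
    gluster volume start testvol
    mount -t glusterfs server1:/testvol /mnt/testvol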
01:18 clusterflustered is there a way to limit your brick size?  say at 1.5 TB?
01:19 clusterflustered or i could just make a quota on /foo/bar/brick1
01:19 partner i am not aware of any such way (except to really give it a disk of 1.5 TB somehow, partition, LVM logical disk, etc.)
01:20 plarsen joined #gluster
01:21 partner from the 3.2 documentation: http://gluster.org/community/documentation/index.php/Gluster_3.2:_Managing_Directory_Quota
01:21 glusterbot <http://goo.gl/txwrM> (at gluster.org)
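From that documentation, a directory quota caps how much of the volume can be used (note it limits usage on the gluster volume, not the size of an individual brick); a sketch with a hypothetical volume name:

    gluster volume quota testvol enable
    gluster volume quota testvol limit-usage / 1536GB   # ~1.5 TB on the volume root
    gluster volume quota testvol list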
01:48 yinyin joined #gluster
02:19 JoeJulian clusterflustered: You could make a 1.5TB file and put a filesystem on it and mount it with loop.
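A sketch of JoeJulian's loopback idea (hypothetical paths; note that a sparse image does not actually reserve the 1.5 TB on the underlying disk):

    truncate -s 1500G /srv/brick1.img        # sparse 1.5 TB image file
    mkfs.xfs /srv/brick1.img                 # mkfs.xfs is happy to format a regular file
    mkdir -p /bricks/brick1
    mount -o loop /srv/brick1.img /bricks/brick1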
02:50 cyberbootje joined #gluster
02:54 mohankumar joined #gluster
03:11 theron joined #gluster
03:32 jjnash left #gluster
03:38 glusterbot New news from newglusterbugs: [Bug 906966] Concurrent mkdir() system calls on the same directory can result in D-state hangs <http://goo.gl/vQ15o>
03:43 Humble joined #gluster
03:44 dustint joined #gluster
03:46 dustint joined #gluster
03:48 dustint joined #gluster
03:52 dustint joined #gluster
04:03 mohankumar joined #gluster
04:11 avati_ joined #gluster
04:18 delete joined #gluster
04:18 ben__ joined #gluster
04:23 delete hi, I'm setting up 2 bricks on 2 servers that also act as client, I tried replica
04:24 delete gluster volume create wwwdatavol replica 2 transport tcp 10.0.0.1:/www 10.0.0.2:/www
04:24 delete when I mount 10.0.0.1:/www on the 10.0.0.1 and the other brick on the other
04:24 delete looks like they don't sync
04:24 delete I can create files on each one and they don't propagate to the other
04:24 JoeJulian Where are you mounting the volume?
04:25 delete /mnt
04:25 delete just for testing
04:25 JoeJulian So when you write a file to /mnt (ie. touch /mnt/foo) you're saying that /www/foo isn't created on both bricks?
04:25 delete right
04:25 delete that is exactly the problem
04:26 JoeJulian Check you client log.
04:26 delete I have both server on the internet
04:26 delete I have opened the port 24007 just for the other server
04:26 JoeJulian @ports
04:26 glusterbot JoeJulian: glusterd's management port is 24007/tcp and 24008/tcp if you use rdma. Bricks (glusterfsd) use 24009 & up. (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111.
04:26 JoeJulian It sounds like you didn't open enough ports.
04:27 delete nice
04:27 delete thanks
04:27 JoeJulian You're welcome
04:27 JoeJulian fyi, the client log would have shown the "connection refused"
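A rough iptables sketch matching glusterbot's port list for this two-brick, two-server setup (one extra brick port is needed for every additional brick; the NFS lines only matter if the NFS access path is used):

    iptables -A INPUT -p tcp -s 10.0.0.2 --dport 24007:24008 -j ACCEPT   # glusterd management (+rdma)
    iptables -A INPUT -p tcp -s 10.0.0.2 --dport 24009:24010 -j ACCEPT   # brick daemons, one port per brick
    iptables -A INPUT -p tcp -s 10.0.0.2 --dport 111 -j ACCEPT           # rpcbind/portmap for NFS
    iptables -A INPUT -p tcp -s 10.0.0.2 --dport 38465:38468 -j ACCEPT   # gluster NFS + NLM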
04:41 lala joined #gluster
05:19 Humble joined #gluster
05:34 H__ joined #gluster
05:59 dustint joined #gluster
06:29 delete joined #gluster
07:03 melanor9 joined #gluster
07:26 ctria joined #gluster
07:35 ekuric joined #gluster
07:48 melanor9 joined #gluster
07:52 mohankumar joined #gluster
08:01 hateya joined #gluster
08:24 lala joined #gluster
08:36 tjikkun joined #gluster
08:36 tjikkun joined #gluster
08:38 lh joined #gluster
08:49 melanor9 joined #gluster
09:09 hateya joined #gluster
09:50 melanor9 joined #gluster
09:50 rm_ joined #gluster
09:51 rm_ good morning / good evening / good whatever everyone
09:57 melanor9 joined #gluster
10:00 rm_ i am new to cluster, and i am trying to figure out what i need to consider when estimating scaling and how to test my assumptions. we have a rather specific problem: our data is somewhere in the range of several hundred tbytes in files that are typically > several gig each, and we have 30-50 clients that need to get north of 4 gbytes/s (combined) out of this. a typical client will need a bandwidth of >500mbit when active. i was wondering how i would go about estimating
10:00 rm_ & testing for this kind of setup on a smaller scale …
10:01 rm_ new to gluster, that is, damn you, auto correct!
10:04 bauruine joined #gluster
10:04 rm_ is this something i should ask on the mailing list, maybe?
10:22 mohankumar joined #gluster
11:02 the-me johnmark: now I forget the nickname of the patch writer.. there are some open questions/recommendations: http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=698502#27
11:02 glusterbot <http://goo.gl/yQdGv> (at bugs.debian.org)
11:04 isomorphic joined #gluster
11:14 stigchri_ joined #gluster
11:26 samppah @geo-replication
11:26 glusterbot samppah: See the documentation at http://goo.gl/jFCTp
11:28 johnmark the-me: I think you mean kkeithley
11:28 johnmark kkeithley: ^^^
11:28 samppah is it possible to set how often geo-replication happens?
11:29 samppah i think i have seen such option but i'm unable to find it
11:29 the-me johnmark: ah thanks, I should write it down ;)
11:36 melanor9 joined #gluster
11:45 RicardoSSP joined #gluster
11:45 RicardoSSP joined #gluster
13:35 vpshastry joined #gluster
13:46 deepakcs joined #gluster
13:55 melanor9 joined #gluster
14:16 theron joined #gluster
15:13 melanor9 joined #gluster
15:17 ekuric left #gluster
15:35 dustint joined #gluster
15:41 melanor9 joined #gluster
16:06 cyberbootje joined #gluster
16:30 cyberbootje joined #gluster
16:30 ekuric joined #gluster
16:36 cyberbootje joined #gluster
16:47 vpshastry left #gluster
17:18 theron joined #gluster
17:41 melanor9 joined #gluster
17:56 hateya joined #gluster
18:04 cyberbootje joined #gluster
18:20 melanor9 joined #gluster
18:24 avati_ joined #gluster
18:43 root joined #gluster
18:43 root Hey guys;
18:43 root What is the difference between a RAM Rabbit Node (in a cluster) and a Disc Node (in a cluster)
18:43 Guest70938 and are 15K RPM 143.8 GB drives good choices
18:44 iamforty15k143gb What is the difference between a RAM Rabbit Node (in a cluster) and a Disc Node (in a cluster)
18:44 iamforty15k143gb and are 15K RPM 143.8 GB drives good choices
18:44 * iamforty15k143gb does not know how to use console irc that well
18:44 * iamforty15k143gb oh yes i remember that command!
18:44 * iamforty15k143gb lol.... any one here?
18:45 avati_ joined #gluster
18:45 * iamforty15k143gb also can i use GlusterFS as a storage disk on my rabbitmq disc nodes?
18:48 johndescs first, don't run IRC as root :D
18:49 iamforty15k143gb hahhah
18:49 iamforty15k143gb oh shit
18:49 iamforty15k143gb yes i will be right back
18:49 iamforty15k143gb :)
18:49 iamforty15k143gb q
18:50 johndescs fixed :D
18:53 ben__ I need a good VPS provider
18:54 nb-ben can anyone make a recommendation?
19:15 _br_ joined #gluster
19:19 isomorphic joined #gluster
19:21 _br_ joined #gluster
19:23 _br_ joined #gluster
19:29 elyograg joined #gluster
19:38 trmpet1 joined #gluster
19:47 JuanBre joined #gluster
20:07 sashko joined #gluster
20:29 unlocksmith joined #gluster
20:29 unlocksmith left #gluster
20:32 unlocksmith joined #gluster
20:33 unlocksmith joined #gluster
20:34 JuanBre joined #gluster
20:42 szopa joined #gluster
20:56 unlocksmith joined #gluster
21:09 ben__ joined #gluster
21:43 nueces joined #gluster
22:09 melanor9 joined #gluster
22:11 glusterbot New news from newglusterbugs: [Bug 905933] GlusterFS 3.3.1: NFS Too many levels of symbolic links/duplicate cookie <http://goo.gl/YA2vM>
22:16 lh joined #gluster
22:16 lh joined #gluster
22:45 JuanBre joined #gluster
22:57 badone joined #gluster
22:59 daMaestro joined #gluster
23:05 Chinorro joined #gluster
23:20 sashko joined #gluster
23:26 clag_ joined #gluster
23:35 JuanBre joined #gluster
