IRC log for #gluster, 2013-08-13


All times shown according to UTC.

Time Nick Message
00:30 social a2_: https://bugzilla.redhat.com/show_bug.cgi?id=996324
00:30 glusterbot <http://goo.gl/0ZqiYO> (at bugzilla.redhat.com)
00:31 glusterbot Bug 996324: unspecified, unspecified, ---, kkeithle, NEW , possible fdleak on unlink
00:38 bala joined #gluster
00:41 a2_ social, you need http://review.gluster.org/5493
00:41 glusterbot Title: Gerrit Code Review (at review.gluster.org)
00:42 a2_ took a bit to hunt it down
00:42 a2_ social, in the meantime disable open-behind till you get a fix for it
00:43 a2_ you can close the bug as a duplicate of 991622
00:44 social thanks a lot :)
00:44 hchiramm_ joined #gluster
00:45 social I should just backport the fix, we already backport "md-cache: fix xattr caching code in getxattr" ;)
00:46 a2_ ok
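
For reference, disabling open-behind as a2_ suggests is a single volume option in the 3.4 CLI. A minimal sketch, assuming a volume named myvol:

    # turn the open-behind translator off until a build with the fd-leak fix is deployed
    gluster volume set myvol performance.open-behind off

    # re-enable it once upgraded
    gluster volume set myvol performance.open-behind on
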
00:48 eryc joined #gluster
00:48 eryc joined #gluster
01:01 hchiramm_ joined #gluster
01:02 chirino joined #gluster
01:08 y4m4_ joined #gluster
01:08 a2_ joined #gluster
01:11 Durzo joined #gluster
01:13 inevity joined #gluster
01:16 lpabon joined #gluster
01:22 asias joined #gluster
01:35 recidive joined #gluster
01:43 awheeler joined #gluster
01:44 awheele__ joined #gluster
01:45 inevity2 joined #gluster
01:56 hchiramm_ joined #gluster
02:07 bala joined #gluster
02:13 hchiramm_ joined #gluster
02:20 inevity joined #gluster
02:33 hchiramm_ joined #gluster
02:35 Technicool joined #gluster
02:46 harish joined #gluster
02:49 inevity joined #gluster
02:53 bharata joined #gluster
02:58 hchiramm_ joined #gluster
02:59 kanagaraj joined #gluster
03:02 TNT joined #gluster
03:05 glusterbot New news from resolvedglusterbugs: [Bug 993571] glusterfsd init file missing in 3.4.0-7.el6.x86_64 rpm <http://goo.gl/Ygucgl>
03:10 aravindavk joined #gluster
03:15 awheeler joined #gluster
03:16 Guest86515 Noob looking to deploy replicated gluster storage... CentOS or Debian? Thoughts?
03:16 hchiramm_ joined #gluster
03:38 _pol joined #gluster
03:51 mdjunaid joined #gluster
03:56 zaitcev joined #gluster
04:02 mohankumar joined #gluster
04:28 sac`away joined #gluster
04:28 sac joined #gluster
04:28 Humble joined #gluster
04:28 RameshN joined #gluster
04:28 meghanam joined #gluster
04:29 sgowda joined #gluster
04:29 hagarth joined #gluster
04:30 sjoeboo joined #gluster
04:33 ngoswami joined #gluster
04:40 CheRi joined #gluster
04:42 ppai joined #gluster
04:45 kshlm joined #gluster
04:47 sgowda joined #gluster
04:50 hagarth joined #gluster
04:53 shylesh joined #gluster
04:53 itisravi joined #gluster
04:55 shruti joined #gluster
05:04 vijaykumar joined #gluster
05:11 Guest86515 Noob looking to deploy replicated gluster storage... CentOS or Debian? Thoughts?
05:18 ababu joined #gluster
05:39 psharma joined #gluster
05:40 rgustafs joined #gluster
05:42 deepakcs joined #gluster
05:45 mdjunaid joined #gluster
05:48 a2 joined #gluster
05:53 y4m4 joined #gluster
05:54 bulde joined #gluster
06:05 vshankar joined #gluster
06:05 raghu joined #gluster
06:10 ndarshan joined #gluster
06:11 rastar joined #gluster
06:13 lalatenduM joined #gluster
06:15 mohankumar sgowda: ping
06:19 jtux joined #gluster
06:21 sgowda mohankumar: morning
06:22 mohankumar sgowda: in our gluster meetup meeting you mentioned a few patches are needed for live migration of VMs on a gluster mount
06:22 mohankumar could you please point me to them?
06:23 sgowda mohankumar: what build are you using?
06:24 mohankumar sgowda: i use git, but not rebased recently
06:24 mohankumar recent commit is bda60de187aadc885bbc705ccb9317f680f4b9d3 (June 17, approx 2 months old)
06:29 vimal joined #gluster
06:33 mohankumar sgowda: rebasing to latest master should fix that issue?
06:34 hagarth mohankumar: rebasing to the latest would be better
06:35 a2_ joined #gluster
06:35 mohankumar hagarth: let me try rebasing and try live migration of a VM
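
A sketch of the rebase-and-rebuild cycle being discussed here, assuming a plain git checkout of glusterfs built with its usual autotools flow:

    cd glusterfs
    git fetch origin
    git rebase origin/master    # bring the ~2-month-old checkout up to current master

    # rebuild and reinstall before retrying live migration of the VM
    ./autogen.sh && ./configure && make && sudo make install
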
06:37 ntt_ joined #gluster
06:38 ntt_ Hi. How can i mount glusterfs from windows?
06:41 andreask joined #gluster
06:45 kanagaraj_ joined #gluster
06:46 satheesh1 joined #gluster
06:47 ngoswami joined #gluster
06:49 kanagaraj joined #gluster
06:55 mooperd joined #gluster
06:55 guigui1 joined #gluster
07:00 ekuric joined #gluster
07:00 jtux joined #gluster
07:05 kanagaraj_ joined #gluster
07:05 nshaikh joined #gluster
07:07 eseyman joined #gluster
07:12 ricky-ticky joined #gluster
07:15 kanagaraj__ joined #gluster
07:19 ntt_ Hi. How can i mount glusterfs from windows? Do I have to run a samba server that mounts glusterfs, and then mount from that samba server on windows? Is this correct?
07:19 Durzo dont think theres a gluster windows client
07:20 ntt_ where is this gluster windows client?
07:20 psharma joined #gluster
07:21 y4m4 ntt_: samba vfs plugin is the right approach with GlusterFS 3.4.0
07:22 ntt_ y4m4: ok, so first i install a samba server that mounts glusterfs, then from windows i connect at this samba server. ok?
07:22 y4m4 ntt_: not really
07:22 dusmant joined #gluster
07:23 ntt_ y4m4: why?
07:25 y4m4 ntt_: install new samba version 3.6.9 which comes with vfs_gluster plugin
07:25 y4m4 ntt_: that should be used for best performance
07:25 y4m4 ntt_: not just mount glusterfs
07:26 y4m4 ntt_: if you are running 3.3.x then yes
07:26 y4m4 you have to mount it
07:26 ntt_ I'm using glusterfs 3.4
07:27 y4m4 ntt_: then you have to perhaps get the new patches from Samba project
07:27 y4m4 for vfs plugin
07:27 ntt_ ok. y4m4: thank you
07:27 kanagaraj joined #gluster
07:32 ntt_ y4m4: there is a documentation/guide/tutorial for vfs_gluster plugin?
07:34 y4m4 ntt_: release notes has some bits of information - https://github.com/gluster/glusterfs/blob/release-3.4/doc/release-notes/3.4.0.md
07:34 glusterbot <http://goo.gl/AqqsC> (at github.com)
07:37 y4m4 ntt_: https://access.redhat.com/site/documentation/en-US/Red_Hat_Storage/2.1-RC/html-single/Administration_Guide/index.html#sect-Administration_Guide-GlusterFS_Client-CIFS-Manual
07:37 glusterbot <http://goo.gl/yT0NRs> (at access.redhat.com)
07:38 y4m4 ntt_: https://access.redhat.com/site/documentation/en-US/Red_Hat_Storage/2.1-RC/html-single/Administration_Guide/index.html#sect-Administration_Guide-GlusterFS_Client-CIFS-Automatic
07:38 glusterbot <http://goo.gl/AfVKl3> (at access.redhat.com)
07:38 y4m4 ntt_: hope that's accessible to you?
07:38 ntt_ ys
07:38 ntt_ yes
07:42 y4m4 ntt_: these are two patches which you need for samba "http://git.samba.org/?p=samba.git;a=commit;h=0b8b6fdc96f59895536d16de43a1494c5eef5c67"
07:42 glusterbot <http://goo.gl/iBSYdl> (at git.samba.org)
07:42 y4m4 ntt_: "http://git.samba.org/?p=samba.git;a=commit;h=6c49f90965327a7f70d24fecdb7529f3f78fc9e4"
07:42 glusterbot <http://goo.gl/eDCr97> (at git.samba.org)
07:43 ntt_ I have a 404 unknown commit object
07:44 nightwalk joined #gluster
07:45 y4m4 ntt_: well, you have to copy the whole URL, then it works :-)
07:45 ntt_ sorry
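
For readers following along: with a Samba build carrying the two patches above, the share talks to the volume through the glusterfs VFS module instead of going through a FUSE mount. A hedged sketch of the smb.conf stanza, assuming a volume named gv0 served from a host called server1 (option names may differ slightly between these patches and later Samba releases):

    # append a gluster-backed share to smb.conf; names and paths are examples
    cat >> /etc/samba/smb.conf <<'EOF'
    [gv0]
        path = /
        read only = no
        vfs objects = glusterfs
        glusterfs:volume = gv0
        glusterfs:volfile_server = server1
    EOF
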
07:54 johnf joined #gluster
07:54 ujjain joined #gluster
07:56 kanagaraj_ joined #gluster
07:56 johnf I currently have two gluster servers set up where the hostnames are set up as gluster-01 and gluster-02. Due to some new clients in a different domain I need the gluster servers to represent themselves as gluster-01.domain.com etc
07:56 johnf Can I simply stop gluster on both hosts, edit all the files in /var/lib/gluster and then start them back up or is there a better way?
07:59 y4m4 johnf: much easier way would be to remove /var/lib/glusterd and re-probe them with FQDN's
07:59 y4m4 and recreate the configs
07:59 y4m4 johnf: as long as the topology of the bricks stays the same, nothing is different from GlusterFS's standpoint
08:00 hagarth @channelstats
08:00 glusterbot hagarth: On #gluster there have been 168091 messages, containing 7120352 characters, 1189374 words, 4764 smileys, and 633 frowns; 1046 of those messages were ACTIONs. There have been 64351 joins, 2012 parts, 62323 quits, 21 kicks, 164 mode changes, and 7 topic changes. There are currently 225 users and the channel has peaked at 226 users.
08:00 puebele1 joined #gluster
08:02 johnf y4m4: thanks
08:03 johnf y4m4: would I need to stop gluster? Or could I do that live?
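
johnf's follow-up goes unanswered in the log. A sketch of the wipe-and-reprobe approach y4m4 outlined; it is a with-downtime procedure (the bricks' data stays in place, but the volume has to be recreated, and recreating on used bricks trips the "already part of a volume" check that comes up later in this log):

    # on each server, with clients unmounted
    service glusterd stop
    mv /var/lib/glusterd /var/lib/glusterd.bak   # keep a copy instead of deleting outright
    service glusterd start

    # from one server: re-probe by FQDN and recreate the volume on the same bricks
    gluster peer probe gluster-02.domain.com
    gluster volume create myvol replica 2 \
        gluster-01.domain.com:/export/brick1 gluster-02.domain.com:/export/brick1
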
08:06 kanagaraj joined #gluster
08:16 kanagaraj joined #gluster
08:19 harish joined #gluster
08:30 nightwalk joined #gluster
08:35 harish joined #gluster
08:37 atrius joined #gluster
08:37 mdjunaid joined #gluster
08:57 tzero joined #gluster
09:08 RameshN joined #gluster
09:10 SteveCoo1ing Yo. Using this: http://download.gluster.org/pub/gluster/glusterfs/3.4/LATEST/EPEL.repo/glusterfs-epel.repo file does not work in RHEL5, since there is no "noarch" directory in http://download.gluster.org/pub/gluster/glusterfs/3.4/LATEST/EPEL.repo/epel-5/ ... any tips?
09:10 glusterbot <http://goo.gl/kq7YS1> (at download.gluster.org)
09:11 SteveCoo1ing is there another "official" yum repo that always gives you the latest 3.4.x ?
09:24 psharma joined #gluster
09:34 psharma joined #gluster
09:38 piotrektt joined #gluster
09:38 piotrektt joined #gluster
09:39 spider_fingers joined #gluster
09:55 _br_ joined #gluster
10:24 kkeithley1 joined #gluster
10:30 RameshN joined #gluster
10:32 jmeeuwen_ joined #gluster
10:35 misuzu_ joined #gluster
10:37 edward1 joined #gluster
10:39 jporterfield_ joined #gluster
10:39 atrius_ joined #gluster
10:40 sgowda joined #gluster
10:41 [o__o] joined #gluster
10:43 badone joined #gluster
10:43 Guest86515 joined #gluster
10:44 jtux joined #gluster
10:45 jiffe98 joined #gluster
10:45 cyberbootje joined #gluster
10:46 rastar joined #gluster
10:46 ricky-ticky joined #gluster
10:46 Guest86515 joined #gluster
10:46 jcsp joined #gluster
10:46 X3NQ joined #gluster
10:46 ThatGraemeGuy joined #gluster
10:46 mjrosenb joined #gluster
10:46 codex joined #gluster
10:46 risibusy joined #gluster
10:46 abyss^_ joined #gluster
10:46 matiz joined #gluster
10:47 jbrooks joined #gluster
10:47 ricky-ticky1 joined #gluster
10:48 guigui1 joined #gluster
10:53 rjoseph joined #gluster
11:01 psharma joined #gluster
11:12 ppai joined #gluster
11:14 jclift joined #gluster
11:18 andreask joined #gluster
11:19 CheRi joined #gluster
11:24 hagarth joined #gluster
11:26 jmeeuwen joined #gluster
11:27 sprachgenerator joined #gluster
11:28 lpabon joined #gluster
11:35 duerF joined #gluster
11:39 nshaikh left #gluster
11:45 hagarth joined #gluster
11:48 nshaikh joined #gluster
11:49 rgustafs joined #gluster
11:51 ngoswami joined #gluster
12:11 CheRi joined #gluster
12:12 B21956 joined #gluster
12:15 chirino_m joined #gluster
12:15 guigui1 joined #gluster
12:29 awheeler joined #gluster
12:29 karthik joined #gluster
12:29 recidive joined #gluster
12:29 bennyturns joined #gluster
12:30 spresser joined #gluster
12:37 glusterbot New news from resolvedglusterbugs: [Bug 980548] intermittent failures of tests/bugs/bug-888174.t <http://goo.gl/lZmRrz>
12:44 awheeler joined #gluster
12:44 shylesh joined #gluster
12:45 awheeler joined #gluster
12:53 zoldar joined #gluster
12:54 kaushal_ joined #gluster
12:54 zoldar Hi. In order to (for example) make the io-cache translator work on the client side - do I have to create a custom volume configuration on the client, or can it be configured at the CLI level?
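
zoldar's question goes unanswered here. For what it's worth, in 3.3/3.4 the client-side performance translators are driven by volume options rather than hand-written client volfiles; clients fetch the regenerated volfile automatically. A sketch, assuming a volume named myvol:

    # io-cache sits in the client graph by default; toggle or tune it from the CLI
    gluster volume set myvol performance.io-cache on
    gluster volume set myvol performance.cache-size 256MB
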
13:05 psharma joined #gluster
13:07 bulde joined #gluster
13:13 sgowda joined #gluster
13:19 hagarth joined #gluster
13:20 mdjunaid joined #gluster
13:24 navid__ joined #gluster
13:25 dewey joined #gluster
13:26 ngoswami_ joined #gluster
13:26 psharma joined #gluster
13:30 Humble joined #gluster
13:37 glusterbot New news from resolvedglusterbugs: [Bug 985406] Cannot change file permissions from windows client <http://goo.gl/kRe7w>
13:37 bulde joined #gluster
13:42 guigui1 joined #gluster
13:42 failshell joined #gluster
13:45 puebele joined #gluster
13:46 failshell joined #gluster
13:52 aliguori joined #gluster
13:52 Humble joined #gluster
14:05 puebele joined #gluster
14:08 Excolo joined #gluster
14:09 verdurin joined #gluster
14:12 Excolo Hi, I was hoping that someone with some experience can help me with a few questions, to hopefully overcome a bit of a nightmare im in
14:12 [o__o] joined #gluster
14:13 Excolo im not really that familiar with gluster, and anyone willing to listen would be a great help
14:17 [o__o] joined #gluster
14:19 bugs_ joined #gluster
14:21 aliguori joined #gluster
14:22 kanagaraj joined #gluster
14:22 Excolo joined #gluster
14:23 __Bryan__ joined #gluster
14:25 Iodun joined #gluster
14:27 plarsen joined #gluster
14:27 Iodun if i create a replicated glusterfs volume, which component handles the replication? the client or one of the pool servers?
14:29 kaptk2 joined #gluster
14:29 sprachgenerator joined #gluster
14:31 hcd joined #gluster
14:31 hcd joined #gluster
14:32 Iodun_ joined #gluster
14:34 vimal joined #gluster
14:38 Excolo If I wanted to replace one gluster brick with a new server... but the guy who set it up used the host name for the server (and I cant swap the host names).... how would I go about that? Essentially replace a brick with a new name
14:39 Technicool joined #gluster
14:42 Iodun_ Excolo: I dont have any experience with glusterfs, but i think there was something in
14:42 Iodun_ the PDF Manual
14:43 Iodun_ about your problem
14:44 Iodun_ http://www.gluster.org/wp-content/uploads/2012/05/Gluster_File_System-3.3.0-Administration_Guide-en-US.pdf
14:44 glusterbot <http://goo.gl/bzF5B> (at www.gluster.org)
14:46 Excolo thanks, im looking at that. My original thought was something like "gluster peer detach mybrick" and then create the brick on the new server (since it is being replicated across to another data center) but i was told that wouldnt work
14:47 Iodun joined #gluster
14:50 lpabon joined #gluster
14:51 Iodun joined #gluster
14:54 Iodun_ joined #gluster
14:55 Iodun_ Excolo: Page 42 in the Admin Guide
14:56 Excolo I love you right now lodun
14:57 Excolo this looks exactly like what i need
14:57 Iodun joined #gluster
14:59 Gilbs2 joined #gluster
15:03 partner hey, i was wondering what would be the recommended way of recovering from a situation where 3x2 dist-repl setup got few bricks filled up completely (0 free) ? 2 hosts with 3 bricks, the host #2 is running again fine and disk have free space after cleanup but #1 is stuck having pretty much all its bricks 0-2 MB free
15:04 partner i know how to trigger self-heal to fix the _missing_ files but heh i have loads of extra files at my bricks obviously..
15:04 Iodun joined #gluster
15:07 spider_fingers left #gluster
15:08 jebba joined #gluster
15:10 Iodun lots of questions here, but no one to answer :/
15:11 Iodun we need an glusterfs expert :P
15:12 partner too early, dudes are just woking up and heading to work shortly
15:12 Excolo welp... lodun... i appreciate your answer. But apparently the guy who installed it says "that will not work, and even if it does it would take like a week and break it in production until complete".... he's being very vague
15:12 Iodun joined #gluster
15:13 partner Excolo: so umm you want to replace a brick? there is a command for that..
15:14 Excolo yes, as ive been informed "gluster volume replace-brick"... but the guy who installed it says "it wont work"... that he tried it before and it crashed gluster... but hes not really giving me details
15:14 partner i've used it succesfully, i moved a brick away from a server to a new server as there was something i needed to do with the original server
15:14 [o__o] joined #gluster
15:15 partner at least with 3.3.1
15:15 Excolo also, i cant remember if ive mentioned, we're running gluster 3.2.5
15:15 partner oh, no experience on that one so "the guy" might be right on that one
15:16 partner so umm did he say adding new brick and then removing the old will also fail?
15:17 Excolo to be honest, gluster seems like an awesome product, but i really think its a bad fit for us that was forced into place... and now im stuck with the pieces
15:17 Iodun partner: do you know whether the glusterfs client spreads the data himself over all the bricks or whether that happens on the server side?
15:18 Technicool joined #gluster
15:18 nateB88 joined #gluster
15:19 bennyturns joined #gluster
15:21 Iodun I know that ceph sends all data to a randomly chosen primary storage backend and replication is handled there... i would like to know how glusterfs handles that
15:23 nateB88 hi all, I'm in a bit of a tight spot and could use a quick pointer. I have been running 3.3.0 for almost a year and until yesterday have had no major problems. I probably am mis-using a few tools … but… My XCP server pool is using a gluster NFS mount to store all the VM disk images. Until yesterday it was good. Yesterday (and now today again) i get a massive CPU consumption on the NFS host in a single glusterfsd process and the NFS clients start to
15:23 nateB88 timeout. Any thoughts on how to A) properly restore NFS, and B) how to prevent this from happening in the future… Thanks in advance...
15:24 sprachgenerator joined #gluster
15:26 semiosis Iodun: the glusterfs native fuse client connects directly to all bricks after retrieving volume information from the ,,(mount server)
15:26 glusterbot Iodun: The server specified is only used to retrieve the client volume definition. Once connected, the client connects to all the servers in the volume. See also @rrnds
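
This is also why a FUSE mount needs only one server name, and why a fallback server matters only at mount time. A sketch using the 3.3/3.4 mount helper's backup option:

    # server1 is consulted just for the volume definition; file I/O then goes to all bricks
    mount -t glusterfs -o backupvolfile-server=server2 server1:/myvol /mnt/myvol
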
15:28 semiosis Excolo: see ,,(replace)
15:28 glusterbot Excolo: Useful links for replacing a failed server... if replacement server has different hostname: http://goo.gl/nIS6z ... or if replacement server has same hostname:
15:28 glusterbot http://goo.gl/rem8L
15:32 Excolo im assuming with that, i should take the "failed" (brick im replacing) out of gluster before starting (to replicate the conditions of what is being done there?)
15:34 semiosis assume we put all the necessary steps in the document
15:34 semiosis you're trying to replace a server, keeping the hostname the same?
15:34 failshel_ joined #gluster
15:35 Excolo no, changing the hostname
15:35 semiosis ohh
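
A sketch of the two procedures behind glusterbot's links, in the 3.3-era CLI (Excolo's 3.2.5 also has replace-brick, though as reported above it was unreliable there); hostnames and paths are examples:

    # old server still alive: migrate the brick to a server with the new hostname
    gluster peer probe newhost.domain.com
    gluster volume replace-brick myvol oldhost:/export/brick1 newhost.domain.com:/export/brick1 start
    gluster volume replace-brick myvol oldhost:/export/brick1 newhost.domain.com:/export/brick1 status
    gluster volume replace-brick myvol oldhost:/export/brick1 newhost.domain.com:/export/brick1 commit

    # old server already dead: skip data migration and let self-heal repopulate the replica
    gluster volume replace-brick myvol oldhost:/export/brick1 newhost.domain.com:/export/brick1 commit force
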
15:37 Iodun so, if my volume replicates data on 4 bricks the client would send the exactly same package 4 times... one to each of the bricks servers?
15:38 semiosis Iodun: yes
15:38 semiosis maybe
15:38 semiosis i dont know if the packets will be exactly the same on the network
15:39 semiosis but the data will be sent to all replicas
15:39 Iodun thats not very efficient... usually the client - server bandwidth is smaller than server - server bandwidth
15:40 semiosis thats just like, your opinion, man
15:40 semiosis ;)
15:42 Iodun_ joined #gluster
15:44 Iodun_ well... i am open to different concepts... but what are the advantages of doing it the glusterfs way?
15:44 Iodun joined #gluster
15:45 semiosis Iodun: easier to scale, no SPOF
15:52 failshell joined #gluster
15:57 hagarth joined #gluster
16:05 navid__ left #gluster
16:07 _pol joined #gluster
16:09 sjoeboo joined #gluster
16:12 lpabon joined #gluster
16:18 jdarcy joined #gluster
16:21 piotrektt joined #gluster
16:21 piotrektt joined #gluster
16:26 Gilbs joined #gluster
16:28 hagarth lubko: thanks for the additions on backport wishlist!
16:36 mooperd joined #gluster
16:39 Mo__ joined #gluster
16:56 JoeJulian jdarcy: My biggest concern with journal-based replication comes from DRBD. They do a journal, too, using a separate cache partition. Once that journal is full, I experienced cascading failure. When the journal can be loaded faster than it's unloaded this is the problem I experienced.
16:58 JoeJulian jdarcy: So what I'm looking to be convinced is how journaled replication maintains full redundancy and how it will prevent the journal input from exceeding the capacity of the journal output.
17:36 jmeeuwen joined #gluster
17:44 jruggiero left #gluster
17:52 partner 18:03 < partner> hey, i was wondering what would be the recommended way of recovering from a situation where 3x2 dist-repl setup got few bricks filled up completely (0 free) ? 2 hosts with 3 bricks, the host #2 is running again fine and disk have free space after cleanup but #1 is stuck having pretty much all its bricks 0-2 MB free
17:52 partner 18:04 < partner> i know how to trigger self-heal to fix the _missing_ files but heh i have loads of extra files at my bricks obviously..
17:53 partner sorry for repeating, i have been quite quiet in here ;)
17:54 JoeJulian self-heal should do that.
17:54 JoeJulian gluster volume heal $vol
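
The heal sub-commands being referred to, as of 3.3/3.4 (rv0 is the volume name that appears in partner's logs below):

    gluster volume heal rv0                    # heal entries already flagged as needing it
    gluster volume heal rv0 full               # crawl and heal the whole volume
    gluster volume heal rv0 info               # list entries still pending heal
    gluster volume heal rv0 info split-brain   # list split-brain entries
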
17:55 sjoeboo joined #gluster
18:07 partner i guess it needs a harder kick.. #1 seems to be running constant 46 load.. been like that for a week+ i guess
18:08 partner 0-rv0-client-2: disconnected
18:09 partner oh, i seemed to kick it a bit on 9th after which such errors and "no active sink" stuff 310k+ lines
18:10 partner [afr-self-heald.c:409:_crawl_proceed] 0-rv0-replicate-2: Stopping crawl for rv0-client-4 , subvol went down
18:12 partner hmph, rebooting the whole box
18:14 jcsp joined #gluster
18:16 xdexter joined #gluster
18:17 xdexter Hello, i have this error: /mnt/images or a prefix of it is already part of a volume, someone help me ?
18:17 glusterbot xdexter: To clear that error, follow the instructions at http://goo.gl/YUzrh or see this bug http://goo.gl/YZi8Y
18:19 xdexter thanks
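
The fix glusterbot links to amounts to clearing the volume metadata that a previous volume left on the brick root. A sketch using xdexter's path; run it only on a brick you are sure should be reused:

    # remove the marker xattrs set by the old volume
    setfattr -x trusted.glusterfs.volume-id /mnt/images
    setfattr -x trusted.gfid /mnt/images

    # drop gluster's internal bookkeeping directory, then retry the volume create
    rm -rf /mnt/images/.glusterfs
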
18:20 _pol joined #gluster
18:23 _pol joined #gluster
18:26 _pol joined #gluster
18:27 semiosis glusterbot: thanks
18:27 glusterbot semiosis: you're welcome
18:28 jbrooks joined #gluster
18:28 jcsp joined #gluster
18:34 lpabon joined #gluster
18:36 bulde joined #gluster
18:56 partner right, the host never came up..
18:56 partner or rather i'd guess it never went down
18:57 Gilbs left #gluster
18:57 jag3773 joined #gluster
18:58 jurrien_1 joined #gluster
19:24 partner i guess it was just stuck somehow, restarting glusterfsd didn't help, reboot fixed it and now i see some free space around the bricks
19:25 JoeJulian excellent
19:25 JonnyNomad joined #gluster
19:42 partner it was my testing setup so i wasn't worried about it that much. however, now as it happened i was curious on how to recover properly in case it ever happens with production systems
19:43 partner happy to see the fd bug was fixed during my holiday (rebalance leaving file handles open)
19:43 partner however someone decided to drop squeeze packages away from it.. but i'm sure 3.3.1 clients work perfectly with 3.3.2 servers?
19:51 _Bryan_ @terms
19:51 glusterbot _Bryan_: I do not know about 'terms', but I do know about these similar topics: 'time'
19:51 _Bryan_ err....what is the macro for brick, server, etc definitions?
19:51 JoeJulian @glossary
19:51 glusterbot JoeJulian: A "server" hosts "bricks" (ie. server1:/foo) which belong to a "volume" which is accessed from a "client". The "master" geosynchronizes a "volume" to a "slave" (ie. remote1:/data/foo).
19:51 _Bryan_ @definitions
19:51 _Bryan_ Thanks..
19:54 zaitcev joined #gluster
20:04 duerF joined #gluster
20:07 semiosis @time
20:07 glusterbot semiosis: Make sure your date and time are in sync. It helps in split-brain situations. At the very least, it'll help you if you even need to do a manual recovery.
20:09 voronaam joined #gluster
20:11 voronaam Hi! I am looking at Gluster as a DFS to share bioinformatics data among the R&D people workstations. I am going to do that without any server, by installing Gluster on their workstations. Do you think it is feasible?
20:16 JoeJulian yes
20:17 JoeJulian ... but then again, there's not much that I find unfeasible... I generally can do very difficult things within an hour. The impossible takes me a little longer.
20:21 voronaam Considering the first time I heard of Gluster was yesterday, I am going to spend a little more time doing that
20:22 semiosis voronaam: usually people set up a storage cluster & access that from client machines.  it's possible to run clients on servers, but there's complications with doing that
20:22 semiosis try out what you have in mind and see for yourself if it will serve your needs
20:22 semiosis if you have trouble, let us know
20:23 voronaam Is there a document describing how "gluster volume rebalance-brick" works?
20:23 voronaam (I want to start with few workstations in a cluster, and grow & rebalance it gradually)
20:23 semiosis be aware that rebalance is expensive
20:23 semiosis bbiab
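
For the grow-gradually plan, the usual 3.4 flow is add-brick followed by an explicit rebalance (the expensive step semiosis warns about). A sketch, assuming a plain distribute volume named myvol and a new workstation ws-new:

    # bring the new workstation into the pool and expand the volume
    gluster peer probe ws-new
    gluster volume add-brick myvol ws-new:/export/brick1

    # spread existing files across the enlarged layout
    gluster volume rebalance myvol start
    gluster volume rebalance myvol status
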
20:25 andreask joined #gluster
20:28 voronaam All the docs on the site say that 3.4 is "nearing release", but it appears to be released. Should I use 3.3 instead?
20:29 JoeJulian No, use 3.4
20:29 voronaam Ok.
20:29 voronaam I think somebody should update http://gluster.org/community/documentation/index.php/Main_Page#GlusterFS_3.4 :)
20:29 glusterbot <http://goo.gl/63QM3a> (at gluster.org)
20:38 nightwalk joined #gluster
20:42 _pol joined #gluster
20:53 jmalm joined #gluster
20:53 dbruhn joined #gluster
20:55 _pol_ joined #gluster
21:00 voronaam Quick question: "Failed to find brick directory /media/data/gluster for volume gvrd. Reason : Not a directory" Gluster does not like symlinks to directories, correct?
21:01 JoeJulian Bricks should be directories, not symlinks.
21:02 jmalm Hey I am having an issue with some split brain stuff.  A file gets an i/o error, when running the SB commands I am getting No such file, trying to delete the file from the filesystem continues to give an i/o error, and trying to access the file in the brick returns no such file.
21:03 jmalm No such file errors AFTER trying the split brain commands.
21:03 JoeJulian jmalm: Which version?
21:03 jmalm 3.3.1
21:04 cfeller joined #gluster
21:04 JoeJulian Try remounting. iirc, that's a known bug in 3.3.1. Should be fixed in 3.3.2/3.4.0
21:04 MugginsM joined #gluster
21:05 tqrst- joined #gluster
21:05 voronaam Why does not "gradle volume delete" clean up brick attributes on brick folders?
21:07 JoeJulian Besides the obvious typo... because there is a bunch of metadata that would be erroneous if that brick were to be used to create a new volume or, even worse, if that brick were added to another existing volume.
21:08 JoeJulian The only other option would be to erase it all and the philosophy is to preserve data whenever possible.
21:09 voronaam Makes sense, thank you
21:13 tjikkun_work joined #gluster
21:14 jmalm JoeJulian after remounting it is still getting no such file in the brick, and an i/o error in the filesystem.
21:15 JoeJulian Check your client log. Use fpaste.org if you need to have another pair of eyes take a look.
21:35 tjikkun_work joined #gluster
21:36 mooperd joined #gluster
21:48 jag3773 joined #gluster
21:49 jag3773 joined #gluster
21:51 fidevo joined #gluster
22:01 Iodun joined #gluster
22:07 mrDougal joined #gluster
22:07 mrDougal hiya!
22:08 Iodun hi
22:08 glusterbot Iodun: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
22:09 mrDougal just a quick one, just enabled the worm feature on one of my volumes that already had data stored and it seems i can still delete the files. we are running a 4 node distributed replica and i used "gluster volume set volname features.worm enable"
22:09 mrDougal no errors. . .
22:10 plarsen joined #gluster
22:11 nightwalk joined #gluster
22:13 JoeJulian I haven't tried that myself. Have you tried remounting?
22:14 mrDougal yeah, needed to unmount to make it worm.
22:15 mrDougal just trying on a new volume to see what happens
22:15 JoeJulian odd, considering that's implemented on the brick graph...
22:18 mrDougal yeah this is a bit odd. new volume setup as follows "volume create worm1 replica 2 arch01:/storage/exp5 arch02:/storage/exp5 arch03:/storage/exp5 arch04:/storage/exp5"
22:18 mrDougal then "volume set worm1 features.worm enable"
22:19 mrDougal "volume start worm1"
22:19 mrDougal all successful
22:19 mrDougal mounted via "mount -t glusterfs arch01.somedomain.com:/worm1 /storage/worm1"
22:20 mrDougal now its coming back as a Read-Only filesystem :-/
22:21 _pol joined #gluster
22:25 fidevo joined #gluster
22:31 tjikkun_work joined #gluster
22:35 JoeJulian I'm not finding any integration tests for worm, unless I'm just looking in the wrong place...
22:35 Bluefoxicy joined #gluster
22:36 JoeJulian worm seems to be a mount option as well.
22:37 mrDougal ahh, ok. Any idea what the option should be or where i can find the doc's on it?
22:37 jebba joined #gluster
22:39 JoeJulian The mount option is "worm". The documentation I found it in was, "vim /sbin/mount.glusterfs" ;)
22:40 voronaam How exactly does ACL work on Gluster? For now I see that the owner is identified by UID (expected), which led to a mess in my case, since users' UIDs do not match on different workstations
22:40 mrDougal nice one, thanks. Will take a look
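
The option JoeJulian dug out of the mount helper, applied to mrDougal's volume; a sketch:

    # enable the WORM translator in the client graph via the mount option
    mount -t glusterfs -o worm arch01.somedomain.com:/worm1 /storage/worm1
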
22:42 JoeJulian voronaam: Commonly a network identification system such as ldap is used to prevent that problem.
22:42 voronaam sure. Ok, I'll figure something out
22:44 _pol_ joined #gluster
22:56 mrDougal JoeJulian that option worked perfectly. Thanks very much for that!
23:10 nueces joined #gluster
23:20 voronaam Is there an option like NFS's "anonuid" on the FUSE driver?
23:23 voronaam nevermind, my iozone testing just completed and NFS access is so much faster that I have to use it.
23:33 awheeler joined #gluster
23:40 ultrabizweb joined #gluster
23:43 voronaam How to apply a translator to a volume?
23:43 voronaam I want to try fixed-uid from this one: http://gluster.org/community/documentation/index.php/Translators/features/filter
23:43 glusterbot <http://goo.gl/b5BQ0> (at gluster.org)
23:45 badone joined #gluster
23:46 voronaam I found a way to edit the volfile directly, but that is not the recommended way, is it?
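
Correct: the CLI has no hook for the filter translator, so trying fixed-uid means hand-inserting a stanza into the client volfile, and glusterd will overwrite the edit whenever it regenerates the file. A hedged sketch of such a stanza, option names per the wiki page above, assuming uid 1000 and a client graph whose top volume is named myvol:

    # added near the top of the client volfile (e.g. /var/lib/glusterd/vols/myvol/myvol-fuse.vol)
    volume uid-filter
        type features/filter
        option fixed-uid 1000      # present all files as owned by uid 1000
        subvolumes myvol
    end-volume
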
