
IRC log for #gluster, 2013-07-08


All times shown according to UTC.

Time Nick Message
00:48 vpshastry joined #gluster
00:56 yinyin joined #gluster
01:07 asriram joined #gluster
01:26 harish joined #gluster
01:36 kevein joined #gluster
01:59 vpshastry joined #gluster
02:03 vpshastry1 joined #gluster
02:05 hagarth joined #gluster
02:10 harish joined #gluster
02:33 vpshastry joined #gluster
02:37 jag3773 joined #gluster
02:55 jag3773 joined #gluster
02:59 vpshastry1 joined #gluster
03:17 jimlin joined #gluster
03:22 bharata joined #gluster
03:23 jimlin_ joined #gluster
03:25 mohankumar joined #gluster
03:31 jimlin joined #gluster
03:54 _pol joined #gluster
04:16 hagarth joined #gluster
04:23 CheRi joined #gluster
04:31 jimlin joined #gluster
04:41 Humble joined #gluster
04:47 fidevo joined #gluster
05:11 nightwalk joined #gluster
05:12 anand joined #gluster
05:14 asriram joined #gluster
05:16 bulde joined #gluster
05:19 deepakcs joined #gluster
05:23 shylesh joined #gluster
05:28 vpshastry joined #gluster
05:28 vpshastry left #gluster
05:30 sgowda joined #gluster
05:32 raghu joined #gluster
05:42 satheesh joined #gluster
05:48 lalatenduM joined #gluster
06:00 psharma joined #gluster
06:02 shireesh joined #gluster
06:07 satheesh joined #gluster
06:09 an joined #gluster
06:16 rastar joined #gluster
06:23 ramkrsna joined #gluster
06:23 ramkrsna joined #gluster
06:26 jtux joined #gluster
06:31 ricky-ticky joined #gluster
06:32 vshankar joined #gluster
06:34 jimlin joined #gluster
06:41 mooperd joined #gluster
06:44 ollivera joined #gluster
06:47 puebele joined #gluster
06:53 ramkrsna joined #gluster
06:56 ctria joined #gluster
07:05 an joined #gluster
07:06 puebele1 joined #gluster
07:07 jtux joined #gluster
07:11 darshan joined #gluster
07:13 ngoswami joined #gluster
07:14 FilipeMaia joined #gluster
07:19 hybrid512 joined #gluster
07:19 satheesh joined #gluster
07:22 ujjain joined #gluster
07:22 hybrid512 joined #gluster
07:24 andreask joined #gluster
07:29 Koma left #gluster
07:34 fidevo joined #gluster
07:34 dobber joined #gluster
07:40 FilipeMaia joined #gluster
07:41 kevein joined #gluster
07:59 harish joined #gluster
08:10 tjikkun_work joined #gluster
08:14 mooperd joined #gluster
08:17 darshan joined #gluster
08:25 tjikkun_work joined #gluster
08:28 FilipeMaia joined #gluster
08:32 FilipeMaia joined #gluster
08:34 glusterbot New news from newglusterbugs: [Bug 981456] RFE: Please create an "initial offline bulk load" tool for data, for GlusterFS <http://goo.gl/d1AFm>
08:42 jtux joined #gluster
08:45 an joined #gluster
09:07 vshankar joined #gluster
09:13 deepakcs joined #gluster
09:14 harish joined #gluster
09:29 ShaharL joined #gluster
09:40 vimal joined #gluster
09:47 vshankar joined #gluster
09:48 vpshastry joined #gluster
09:53 dobber_ joined #gluster
09:53 spider_fingers joined #gluster
09:54 asriram1 joined #gluster
10:03 jhaagmans joined #gluster
10:04 shylesh joined #gluster
10:05 jhaagmans joined #gluster
10:06 vshankar joined #gluster
10:17 jhaagmans Hello, I have a question. We've been using nfs to mount our webroot to our loadbalanced webnodes. However, this is currently a single point of failure. I've been looking at a few options, including using a second nfs which would act as a failover using HAproxy, but I'm wondering whether Gluster would be a better fit for our situation. I've read some documents and use cases, but most of these tend to focus on performance. While performance is important to us, reliability is our main priority.
10:23 jhaagmans So my question is: will Gluster be able to replace our NFS and function as a redundant mounted storage facility?
10:24 samppah jhaagmans: it's possible to replace nfs with glusterfs or even use nfs with gluster..
10:24 samppah however, performance is some kind of problem indeed
10:24 samppah @php
10:24 glusterbot samppah: php calls the stat() system call for every include. This triggers a self-heal check which makes most php software slow as they include hundreds of small files. See http://goo.gl/uDFgg for details.
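
A minimal sketch of the usual mitigations for the stat() issue glusterbot describes, assuming a FUSE-mounted webroot; the server name, volume name, mount point, and timeout values are illustrative, not taken from this conversation:

    # cache attribute/entry lookups longer on the client to cut stat() round-trips
    mount -t glusterfs -o attribute-timeout=30,entry-timeout=30 server1:/webroot /var/www
    # and, if APC is in use, stop it from stat()ing every include on each request (php.ini):
    #   apc.stat = 0
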
10:25 an joined #gluster
10:28 ccha I statedump my volume and I see alot locks.inode
10:28 ccha but nothing to heal
10:29 ccha [xlator.features.locks.MYVOL-locks.inode]
10:29 ccha path=/xml_history/histo-2013.07.08.tar
10:29 ccha mandatory=0
10:29 ccha that's all for each locks
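
For reference, a hedged sketch of the statedump workflow being described; the dump location differs between installs and the clear-locks syntax should be checked against the CLI help before use:

    gluster volume statedump MYVOL                 # write per-brick state files
    ls /var/run/gluster/*.dump.*                   # common dump location (sometimes /tmp)
    # a stale lock on a single file can often be released explicitly, e.g.:
    gluster volume clear-locks MYVOL /xml_history/histo-2013.07.08.tar kind all inode
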
10:29 shylesh joined #gluster
10:30 jhaagmans Thanks samppah. Is it advisable to place Gluster on top of a software RAID setup or would it be better to use the individual volumes as bricks and let Gluster take care of performance and redundancy?
10:31 puebele joined #gluster
10:32 samppah jhaagmans: personally i prefer raid as it boosts performance (raid10) and also helps detecting failed disks
10:34 jhaagmans Thanks again samppah, I'm going to try this on our test environment.
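
A sketch of what such a test setup might look like, assuming two nodes each exposing one RAID-backed brick; all host, brick, and volume names are placeholders:

    gluster volume create webroot replica 2 node1:/bricks/raid10/webroot node2:/bricks/raid10/webroot
    gluster volume start webroot
    # on each web node:
    mount -t glusterfs node1:/webroot /var/www
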
10:37 vshankar joined #gluster
11:10 csshankaravadive joined #gluster
11:11 timothy joined #gluster
11:13 csshankaravadive Need help on this issue
11:13 csshankaravadive http://www.gluster.org/pipermail/gluster-users/2013-July/036373.html
11:13 glusterbot <http://goo.gl/02Pr3> (at www.gluster.org)
11:15 andreask csshankaravadive: selinux enabled?
11:15 andreask ... just a wild guess ...
11:16 csshankaravadive give me a minute
11:16 csshankaravadive will check
11:17 csshankaravadive its disabled
11:17 csshankaravadive do you think enabling would help
11:20 CheRi joined #gluster
11:25 edward1 joined #gluster
11:38 csshankaravadive but is it mandatory to enable selinux
11:39 csshankaravadive I guess amazon Linux does not have selinux enabled
11:39 csshankaravadive but it works fine there
11:40 vimal joined #gluster
11:42 X3NQ joined #gluster
11:42 andreask joined #gluster
11:47 guigui3 joined #gluster
11:52 saurabh joined #gluster
11:57 robos joined #gluster
12:03 CheRi joined #gluster
12:04 ngoswami joined #gluster
12:05 theron joined #gluster
12:17 anand joined #gluster
12:18 bulde joined #gluster
12:24 guigui3 joined #gluster
12:29 aliguori joined #gluster
12:37 vimal joined #gluster
12:46 jthorne joined #gluster
12:46 hagarth joined #gluster
12:49 bennyturns joined #gluster
12:49 balunasj joined #gluster
12:56 aknapp joined #gluster
12:57 mohankumar joined #gluster
13:02 pkoro joined #gluster
13:07 _pol joined #gluster
13:14 chirino joined #gluster
13:30 plarsen joined #gluster
13:33 chirino joined #gluster
13:40 VSpike left #gluster
13:40 failshell joined #gluster
13:41 _BuBU joined #gluster
13:43 rwheeler joined #gluster
13:47 bulde joined #gluster
13:48 zaitcev joined #gluster
13:50 ShaharL joined #gluster
13:57 bfoster joined #gluster
13:57 robos joined #gluster
13:58 chjohnst_work joined #gluster
14:01 shylesh joined #gluster
14:05 bugs_ joined #gluster
14:10 ShaharL joined #gluster
14:14 yinyin joined #gluster
14:15 vpshastry joined #gluster
14:21 atrius joined #gluster
14:26 guigui3 joined #gluster
14:35 bsaggy joined #gluster
14:36 soukihei joined #gluster
14:40 robos joined #gluster
15:01 _pol joined #gluster
15:05 chirino joined #gluster
15:12 jag3773 joined #gluster
15:15 madphoenix joined #gluster
15:16 plarsen joined #gluster
15:16 an joined #gluster
15:17 lpabon joined #gluster
15:17 madphoenix can anybody give me some tips on troubleshooting client-side performance problems with the fuse client?  i have a server that is basically hung right now, with 2 glusterfs processes pegging the CPUs.  When I attach strace to the two processes I see nothing, so presumably they are hung.  The bricks themselves aren't seeing any strange performance issues
15:17 rwheeler_ joined #gluster
15:19 Technicool joined #gluster
15:21 spider_fingers left #gluster
15:22 kedmison joined #gluster
15:23 semiosis madphoenix: glusterfs version?  distro?
15:24 madphoenix 3.3.1 from glusterfs-epel on centos 6.4
15:24 semiosis clients & servers all the same version?
15:24 madphoenix correct
15:24 semiosis anything in the client log file?
15:25 madphoenix yes - one moment let me pastebin it
15:26 semiosis ty
15:28 aknapp joined #gluster
15:29 madphoenix semiosis: http://pastebin.centos.org/3127/
15:29 glusterbot Title: #3127 CentOS Pastebin (at pastebin.centos.org)
15:29 madphoenix those messages are from friday afternoon when the problem was first noticed
15:29 madphoenix additionally, there is nothing new in the log since then
15:30 semiosis madphoenix: pretty clear your client can't reach the bricks
15:30 semiosis can you telnet from client machine to servers on port 24007?
15:31 semiosis also maybe ,,(pasteinfo) could help
15:31 glusterbot Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
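
The checks being asked for, collected as a hedged sketch; host names are placeholders and the volume name is taken from later in the conversation:

    telnet server1 24007            # can the client reach glusterd?
    gluster volume info bigdata     # layout, options, brick list (paste this)
    gluster volume status bigdata   # are all brick processes up and listening?
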
15:31 vimal joined #gluster
15:31 wushudoin joined #gluster
15:32 madphoenix trying - unfortunately these processes are hanging the entire host right now
15:32 vpshastry left #gluster
15:33 semiosis pastebin brick log(s) please
15:33 bulde joined #gluster
15:35 vimal joined #gluster
15:36 kedmison @madphoenix: perhaps check the glusterfsd processes on host 'bigdata' that are handling bricks 1 thru 5 there.  I'm seeing a similar problem on my clients, and my glusterfsd processes are spinning, running at 100%CPU.  I wonder if you're in the same situation.
15:36 madphoenix kedmison: bigdata is actually the volume
15:36 _BuBU Hi
15:36 glusterbot _BuBU: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
15:36 madphoenix and semiosis: yes, I can telnet to all bricks
15:37 jclift_ joined #gluster
15:37 _BuBU is user_xattr a requirement for using glusterfs in replica mode ?
15:37 semiosis madphoenix: ok next test would be to make a new client mount point somewhere unimportant so you can see if it connects (or get a good clean log if it doesnt)
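
A sketch of that throwaway test mount, with a debug-level log to capture a clean connection attempt; paths and the server name are placeholders:

    mkdir -p /mnt/gluster-test
    mount -t glusterfs -o log-level=DEBUG,log-file=/var/log/gluster-test.log server1:/bigdata /mnt/gluster-test
    tail -f /var/log/gluster-test.log    # watch for connection/handshake errors
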
15:38 semiosis _BuBU: it is not required.  glusterfs uses trusted xattrs which are always enabled
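
Those trusted xattrs can be inspected directly on a brick as root; a sketch with a placeholder path:

    getfattr -m . -d -e hex /bricks/b1/some/file
    # typical entries include trusted.gfid and, on replicated volumes, trusted.afr.*
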
15:38 madphoenix semiosis: i think at this point I'm going to have to reboot the host because I can't even work in the terminal anymore
15:38 semiosis madphoenix: sounds good
15:38 madphoenix unfortunately that probably will limit my ability to find the root cause of this
15:38 _BuBU In fact I've a mountpoint /mnt/glusterfs/customers with several subdirectories.. and want to create one volume per subdirectory
15:39 _BuBU but I get: "... or a prefix of it is already part of a volume" when creating the different volumes
15:39 glusterbot _BuBU: To clear that error, follow the instructions at http://goo.gl/YUzrh or see this bug http://goo.gl/YZi8Y
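
The fix behind that link amounts to stripping the leftover volume markers from the directory you want to reuse as a brick; a hedged sketch with a placeholder path (only run this on a path you really intend to turn into a new brick):

    setfattr -x trusted.glusterfs.volume-id /bricks/customers/sub1
    setfattr -x trusted.gfid /bricks/customers/sub1
    rm -rf /bricks/customers/sub1/.glusterfs
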
15:41 _BuBU so I can get rid of the user_xattr mount option right in my case ?
15:41 chirino joined #gluster
15:42 madphoenix kedmison: i'd be curious to hear more of what you're experiencing.  If a temporary loss of network connection to the bricks causes the FUSE client to hang the system, that seems like an important problem ...
15:43 _BuBU actually before I was using only one customers volume and mounting subdirs of it for each client with the nfs proto (the gluster nfs one)
15:43 _BuBU but I would like to go with gluster proto itself now
15:44 _BuBU but multi-tenant is not yet implemented I think ?
15:44 _BuBU at least not before 3.4 last time I read about it..
15:44 kedmison @madphoenix: sorry for jumping in and making assumptions on what you were seeing.  I'd be glad to describe it; I'm really struggling with this.
15:45 _BuBU So moving to a per customer volume bases right now..
15:46 kedmison @madphoenix: Initial configuration: 2-node Centos 6.4, Gluster 3.3.1.  2 clients (for now). I've added 2 more gluster nodes, and am trying to migrate data off one of the initial nodes so that I can re-build the RAID-layer.
15:47 kedmison @madphoenix: I am doing the recommended add-brick/remove-brick process for 3.3 to migrate the data, as the mailing lists are suggesting that replace-brick is deprecated.
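
For readers following along, that 3.3-era migration flow looks roughly like this; brick paths are placeholders, and data is only migrated if the remove-brick goes through start/status/commit rather than force:

    gluster volume add-brick VOL node3:/bricks/b1 node4:/bricks/b1
    gluster volume remove-brick VOL node1:/bricks/b1 node2:/bricks/b1 start
    gluster volume remove-brick VOL node1:/bricks/b1 node2:/bricks/b1 status   # wait for "completed"
    gluster volume remove-brick VOL node1:/bricks/b1 node2:/bricks/b1 commit
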
15:48 madphoenix @kedmison: so you're also seeing the glusterfs processes pegging the CPUs and not doing much?
15:48 kedmison @madphoenix: the data rebalance has gone through about 80% of the data on the brick I'm trying to remove.  Now all the glusterfsd processes on that node are pegged at 100% cpu.
15:48 jbrooks joined #gluster
15:49 Gualicho joined #gluster
15:49 madphoenix kedmison: hm, sounds like it might be a different issue, since you're having issues with the gluster back-end and in my case it was the FUSE client
15:49 kedmison @madphoenix: and any clients who are trying to mount that volume and/or read/write to it are hung.  the client logs are full of messages very similar to what you are seeing: e.g. can't stat, and then 'Transport endpoint not connected'.
15:50 madphoenix kedmison: are you still getting log output?  my client stopped logging on Friday afternoon when the problem was first noticed.  also when I attached strace to the gluster processes on the client, nothing was happening
15:50 semiosis kedmison: transport endpoint not connected is a very common error. that just means client lost connection to a brick.
15:51 unclean joined #gluster
15:51 semiosis kedmison: there are many many causes
15:51 semiosis kedmison: check brick processes & logs to see if there's a problem with the bricks themselves, otherwise could be a network issue
15:51 kedmison Fair enough; but the client in this case is actually a VM running on the very same glusterfs node to which it is losing communications…
15:53 kedmison so I am a bit puzzled at what could affect that particular network path so badly.
15:54 wushudoin left #gluster
15:57 kedmison my brick logs are full of error-level messages 'transport.address-family not specified'  which has me puzzled, afaik it's OK to not specify the address-family.
15:57 kedmison it should just default to TCP, from what I've read.
15:58 semiosis kedmison: ,,(pasteinfo)
15:58 glusterbot kedmison: Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
15:59 kedmison http://fpaste.org/23708/99163137/
15:59 glusterbot Title: #23708 Fedora Project Pastebin (at fpaste.org)
16:01 vpshastry joined #gluster
16:11 eryc joined #gluster
16:11 eryc joined #gluster
16:13 Mo_ joined #gluster
16:13 Gualicho joined #gluster
16:13 jdarcy joined #gluster
16:14 Gualicho hello guys
16:14 Gualicho anybody knows what's .glusterfs_back directory for?
16:14 Gualicho I have 2 replicated nodes, and one of them has a very large .glusterfs_back, while on the other it's empty
16:19 bulde joined #gluster
16:21 vpshastry left #gluster
16:26 daMaestro joined #gluster
16:34 atrius joined #gluster
16:35 csshankaravadive any clues on this issue http://www.gluster.org/pipermail/gluster-users/2013-July/036373.html
16:35 glusterbot <http://goo.gl/02Pr3> (at www.gluster.org)
16:36 csshankaravadive could this be a bug?
16:38 kedmison @csshankaravadive: does /dev/fuse actually exist?  (i.e. can you 'ls /dev/fuse')?
16:39 kedmison @csshankaravadive: I seem to remember running into a situation where I had the gluster client packages installed but not the fuse package itself, and some similar messages (although I was mounting the volume from the command line)
16:39 theron joined #gluster
16:39 csshankaravadive @kedmison /dev/fuse exists
16:39 csshankaravadive ls -lhtra /dev/fuse
16:39 csshankaravadive crw-rw-rw- 1 root root 10, 229 Jul  8 03:57 /dev/fuse
16:40 kaptk2 joined #gluster
16:40 csshankaravadive @kedmison how should I verify if Fuse is installed beyond this?
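
A few further checks, as a hedged sketch (package names assume an RPM-based distro):

    rpm -q fuse fuse-libs      # packages installed?
    lsmod | grep fuse          # kernel module loaded?
    modprobe fuse              # load it if it isn't
    dmesg | tail               # any fuse or permission errors from the last mount attempt?
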
16:42 mkollaro joined #gluster
16:44 mkollaro could somebody give me a use case for Swift+Gluster? I understand that Gluster gets used as the object server for Swift...but why would a user want to do this?
16:44 kedmison @csshankaravadive: I took a look and found that on my CentOS 6.4  client, I didn't need to install the fuse client, but on my CentOS 5.9 client I did have to 'yum install fuse'.
16:45 mkollaro my current idea is that a user already uses Gluster and wants to start using OpenStack, so he uses Swift as a wrapper around it for API compatibility?
16:45 csshankaravadive @kedmison I just did yum install fuse and I got the following output
16:45 csshankaravadive Package fuse-2.8.3-4.el6.x86_64 already installed and latest version
16:45 csshankaravadive which confirms that fuse is installed
16:46 ctria joined #gluster
16:46 csshankaravadive @kedmison I have been using amazon Linux
16:46 csshankaravadive where I never saw this problem
16:46 csshankaravadive I am just seeing this with OVM
16:48 kedmison @csshankaravadive: I'm a bit of a newbie to gluster myself, so I don't know where to look beyond that.
16:48 Technicool mkollaro, typically it wouldn't be for the average user but for someone who wants to interact with objects in a programmatic fashion....one example would be for updating pictures to a website
16:49 Technicool mkollaro, you could then set tags like "July" and set logic to expire them, as an arbitrary example
16:49 kedmison @csshankaravadive: are there differences in the firewall rules, or in SELinux configurations?
16:49 csshankaravadive @kedmison, thanks for your inputs. Maybe there is a clue here in the form of fuse
16:50 mkollaro Technicool: but doesn't Swift already support that? you could just add metadata to it and Swift has expiration times too
16:50 csshankaravadive @kedmison SELinux is disabled in both VMs
16:50 lalatenduM joined #gluster
16:51 csshankaravadive @kedmison this is in EC2
16:51 csshankaravadive so all the clients I try to mount use same security groups
16:52 timothy joined #gluster
16:55 kedmison @csshankaravadive: I don't think I can offer much help beyond that. sorry.
16:56 csshankaravadive @kedmison. Np. Thanks for your time. I appreciate it
17:01 chirino joined #gluster
17:06 Technicool mkollaro, correct...the difference in using gluster for swift is getting things like replication and autofailover, so I like to think of it as swift with the benefits of gluster included
17:07 rastar joined #gluster
17:09 mkollaro Technicool: well, that is exactly what I am wondering about...what are the benefits of Gluster behind a Swift wrapper that Swift doesn't provide itself? I'm not familiar with Gluster, all I know is that it also provides block storage, but that doesn't get used when accessed through Swift's API
17:13 failshell joined #gluster
17:16 jbrooks joined #gluster
17:26 andreask joined #gluster
17:29 kkeithley mkollaro: Gluster does not provide block (device) storage.  Another thing that Gluster does, that you won't get from vanilla Swift, is the ability to dynamically grow your storage.
17:31 mkollaro kkeithley: it doesn't provide block storage? then how can it be used as a backend for Cinder? http://www.gluster.org/community/documentation/index.php/GlusterFS_Cinder
17:31 glusterbot <http://goo.gl/Q9CVV> (at www.gluster.org)
17:32 mkollaro kkeithley: and what do you mean by "dynamically grow your storage"?
17:39 kkeithley dynamically grow your storage, i.e. add more bricks when the existing ones get full, without shutting anything down.
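
That online growth is just add-brick followed by a rebalance; a sketch with placeholder names:

    gluster volume add-brick VOL node5:/bricks/b1 node6:/bricks/b1
    gluster volume rebalance VOL start
    gluster volume rebalance VOL status
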
17:41 kkeithley Cinder does its thing on top of Gluster, but Gluster by itself has not block storage capability. FWIW, in 3.4 there's a bd translator that provide a pseudo block device to oVirt, but it's not a real block device either.
17:41 kkeithley gack, my typing today is the pits
17:42 ShaharL joined #gluster
17:44 mkollaro kkeithley: well, AFAIK Swift can do that too (dynamically add storage), so I still don't know what are the extra features...
17:46 mkollaro kkeithley: so, it just creates a big file that serves as a block storage device and splits it into multiple smaller objects?
17:50 kkeithley I haven't seen the ability to add more disks to Swift's ring, not without shutting Swift down, changing the config and restarting. Maybe I'm wrong.  And Technicool already mentioned Gluster replication and fail-over.
17:52 FilipeMaia joined #gluster
17:54 chirino joined #gluster
18:06 ultrabizweb joined #gluster
18:07 jdarcy joined #gluster
18:11 joelwallis joined #gluster
18:16 jdarcy joined #gluster
18:17 FilipeMaia joined #gluster
18:24 jdarcy joined #gluster
18:38 RangerRick12 joined #gluster
18:43 jdarcy joined #gluster
18:44 FilipeMaia joined #gluster
18:46 csshankaravadive left #gluster
18:52 RangerRick13 joined #gluster
19:06 cfeller joined #gluster
19:07 jdarcy joined #gluster
19:09 rcheleguini joined #gluster
19:14 JoeJulian mkollaro: GlusterFS also gives you a unified posix filesystem in addition to swift.
19:15 mkollaro JoeJulian: but I guess the user doesn't have use of it when it's wrapped around by Swift...I'm trying to find out why would a user want to use Swift+Gluster
19:16 JoeJulian Depends on your definition of "user".
19:17 JoeJulian You could also ask the same question, "Why would the user need an html access to a posix filesystem?"
19:21 failshell i wonder, i used to run a distributed replicated cluster. i had roughly 1.1TB on it. now that i migrated to a replicated only cluster, i only have ~600GB. why?
19:22 eightyeight joined #gluster
19:23 JoeJulian Not enough information.
19:24 JoeJulian V = s * b / r ( Volume = servers * bricks / replica )
19:25 JoeJulian assuming all bricks are the same size.
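
As a worked illustration of that formula with assumed brick sizes (the actual layout was never pasted): four 550 GB bricks arranged as distribute over replica 2 give 4 × 550 / 2 = 1100 GB usable, while the same four bricks as a single replica-4 set give 4 × 550 / 4 = 550 GB, roughly the drop described above.
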
19:27 kaptk2 joined #gluster
19:32 mkollaro JoeJulian: by user, I mean the admin of the datacenter
19:35 eightyeight joined #gluster
19:38 andreask joined #gluster
19:44 JoeJulian mkollaro: That's what I mean, too, but it depends on what that admin's use case is.
19:44 an joined #gluster
19:46 JoeJulian If you're looking for justification to sell RHS to a datacenter... well, like in all sales, it's not about justification, it's about analyzing their needs and choosing the solution that helps them with that need and the future needs they haven't thought of yet.
19:46 mkollaro JoeJulian: yes...I'm trying to figure out what a use case for Swift+Gluster would be, i.e., why would a datacenter operator choose to use it like this?
19:48 unclean left #gluster
19:48 JoeJulian I have an application I'm starting that doesn't use it, but because I know the tool is available, I'm considering making use of it. Swift to do content delivery and a data analysis application that uses the posix interface.
19:58 johnmark_ mkollaro: because moving data around is expensive
19:58 johnmark_ ...computationally
20:02 mkollaro johnmark_: you mean when a user is already using Gluster and switching to Swift would require moving the data?
20:03 johnmark mkollaro: either way. being able to access teh same data via different APIs is incredibly powerful
20:03 johnmark and there are customers who pay for exactly that. It's the same reason we are developing an HDFS shim
20:04 mkollaro johnmark: so, one use case is when the user already has his data on gluster, the other is when he wants to access it as a FS in addition to the REST API, right?
20:05 johnmark mkollaro: right
20:08 mkollaro johnmark: any other ideas?
20:17 mkollaro thanks for the help, btw :)
20:18 failshell im happy Swift is available with RHS
20:18 failshell we're looking into using it
20:19 failshell right now, i have nginx serving a few volumes, but i like all the things wrapped up with Swift
20:19 failshell like auth
20:29 rhys joined #gluster
20:30 rhys can someone tell me a graceful way to remove a gluster brick?
20:40 semiosis johnmark: welcome back
20:40 semiosis johnmark: any word on getting a static export of C.G.O?
20:41 johnmark semiosis: no :(
20:42 johnmark it's a few lines down my priority list, at the top of which, btw, is putting together the awestuctified site
20:48 * semiosis looks up awestuct
20:48 semiosis cool
20:50 semiosis johnmark: not sure if you caught this but i've taken on the java integration project... https://forge.gluster.org/libgfapi-jni (with chirino's help) and https://github.com/semiosis/glusterfs-java-filesystem
20:50 glusterbot <http://goo.gl/KNsBZ> (at github.com)
20:50 yinyin joined #gluster
20:50 semiosis need to put glusterfs-java-filesystem on the forge too
20:50 * semiosis goes to do that
20:50 kedmison joined #gluster
20:51 Technicool joined #gluster
20:52 aknapp joined #gluster
21:00 jdarcy joined #gluster
21:00 johnmark semiosis: approved :)
21:00 semiosis thx
21:10 alfacard joined #gluster
21:10 * alfacard is backkk
21:13 mooperd joined #gluster
21:24 johnmark semiosis: by the way, that looks really interesting. looking forward to seeing what you make of it
21:25 aknapp left #gluster
21:26 semiosis the first real world application I have in mind is deploying a WAR to elastic beanstalk that can access my glusterfs volumes in EC2
21:27 semiosis hacking a fuse mount into elastic beanstalk is possible in theory, but imho unmaintainable
21:29 semiosis anyway, once the jni stuff is usable it will enable glusterfs client apps to be written in any jvm language
21:29 semiosis so hopefully lots of people come up with interesting ideas
21:29 semiosis beyond by boring web apps
21:29 semiosis s/by/my/
21:29 glusterbot semiosis: Error: I couldn't find a message matching that criteria in my history of 1000 messages.
21:29 semiosis glusterbot: meh
21:29 glusterbot semiosis: I'm not happy about it either
21:30 johnmark lulz
21:30 johnmark semiosis: that sounds awesome
21:35 FilipeMaia joined #gluster
21:45 RangerRick13 joined #gluster
22:01 paratai_ joined #gluster
22:03 ninkotech joined #gluster
22:04 _pol joined #gluster
22:05 matiz joined #gluster
22:17 phox joined #gluster
22:18 phox any way to make Gluster clean up all of its accounting crap from having _had_ millions of little files on a volume?  or does it at LEAST reuse this stuff next time you put more files on it?
22:19 bulde joined #gluster
22:23 phox IOW does Gluster have GC?
22:24 phox Bueller?
22:30 semiosis what accounting?
22:30 semiosis phox: what are you talking about?
22:40 phox semiosis: I have a gluster volume that I put thousands or millions of files onto and then deleted them, now the brick has 85G of whatever stuck on it
22:41 semiosis that's odd
22:41 phox although... it might not be that either... it was being buggy when I was doing some of the deletes so maybe it decided that stuff doesn't exist even though it does on the brick
22:41 semiosis how did you delete?  through a client mount i hope
22:41 phox *looks*
22:41 phox local client mount yes
22:41 semiosis hmm
22:41 phox FUSE client, not NFS
22:41 phox gonna go and see what's on the brick
22:41 semiosis only place data could hide is in the .glusterfs directory in the root of the brick
22:41 semiosis although i'm pretty sure that stuff should get deleted along with the files
22:42 semiosis when things are working right
22:42 phox yeah
22:42 phox "when" :P heh
22:42 phox I was having some weird "transport endpoint disconnected" and other FD-in-bad-state errors while deleting and doing other stuff
22:42 phox while deleting, and while reading zillions of like 5MB files for archival
22:43 phox still waiting on du on .glusterfs on the brick
22:43 phox copying some more stuff to it right now too because I wanted to see if adding + deleting more crap will increase usage
22:43 phox there are obviously a _lot_ of files in .glusterfs
22:46 phox _still_ going
22:51 Technicool i saw similar behaviour when i did a remove-brick the other day...after the files rebalanced there was still tons of space being consumed in .glusterfs
22:52 vpshastry joined #gluster
22:52 Technicool since the rebalance worked and the brick was decomm'ed i just deleted the directory from the backend (probably not something i would recommend in production)
22:58 phox semiosis: haha, ok, so I'm assuming there's some sort of reorganization going on here.  as I'm copying MORE stuff onto the volume (but not before, it seems, although that could do with further testing), the space usage is going _down_.
22:59 semiosis phox: news to me
22:59 phox curious
22:59 phox .glusterfs was 12G whenever it finished looking through it
23:00 JoeJulian Technicool: remove-brick leaving stuff behind is expected behavior.
23:00 phox although curiously the dir I'm making is 19G so far and .glusterfs reports as 12G and yet space used on the volume is about 76G right now
23:00 JoeJulian phox: I would expect that's related to your client disconnection issues. Probably left files in a "needs-to-be-healed" state.
23:01 phox JoeJulian: no striping though
23:01 phox does that still happen on a single brick?
23:01 phox and du _should_ be including block round-up, so... not sure how I'm using _so_ much space
23:01 Technicool JoeJulian, there were several issues i think related to the brick running out of space during a copy...all the behaviour from then on was unexpected  ;)
23:01 JoeJulian Hmm, I guess it could... depends on when it lost connection I suppose.
23:02 phox it does look like a bunch of this is ZFS overhead or something that I'm being lied to about by du
23:03 phox -weird-
23:18 recidive joined #gluster
23:20 kedmison joined #gluster
23:40 rcoup joined #gluster
23:46 rcoup hey folks, if I have a dist+replicate volume (eg. 8x2) and I want to drop two pairs of volumes (to 6x2)… I should just be able to do "gluster volume remove-brick myvolume replica 2 node1:brick8 node2:brick8 node1:brick7 node2:brick7" ?
23:47 rcoup or is the "replica 2" bit just for changing the replica-count of a volume?
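
A hedged sketch of the shrink flow being asked about: bricks come out a whole replica pair at a time, using start/status/commit so data migrates off first; whether the "replica 2" keyword is needed when the replica count is unchanged varies by version, so check the CLI usage output on your build:

    gluster volume remove-brick myvolume node1:brick8 node2:brick8 node1:brick7 node2:brick7 start
    gluster volume remove-brick myvolume node1:brick8 node2:brick8 node1:brick7 node2:brick7 status
    gluster volume remove-brick myvolume node1:brick8 node2:brick8 node1:brick7 node2:brick7 commit
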
23:55 vpshastry joined #gluster
