
IRC log for #gluster, 2013-09-19


All times shown according to UTC.

Time Nick Message
00:21 micu2 joined #gluster
00:48 vpshastry joined #gluster
01:03 bala joined #gluster
01:06 lalatenduM joined #gluster
01:07 zapotah joined #gluster
01:07 zapotah joined #gluster
01:26 zapotah joined #gluster
01:26 zapotah joined #gluster
01:52 micu1 joined #gluster
02:04 micu1 joined #gluster
02:10 harish_ joined #gluster
02:14 davinder joined #gluster
02:17 bharata-rao joined #gluster
02:32 \_pol joined #gluster
02:50 Bjorklund Is there any way to recalculate quota?
02:50 Bjorklund # gluster volume quota v3lq4elxb0fkv8uaeq69s8ovbsuepu list path              limit_set          size
02:50 Bjorklund ----------------------------------------------------------------------------------
02:50 Bjorklund 20GB            16384.0PB
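A hedged sketch of one commonly suggested workaround for a bogus quota size like the one above, assuming /data stands in for the real directory (which was not shown in the paste): remove and re-apply the limit so the quota accounting on the bricks is recomputed.

    # volume name is from the log; the path is a hypothetical placeholder
    gluster volume quota v3lq4elxb0fkv8uaeq69s8ovbsuepu remove /data
    gluster volume quota v3lq4elxb0fkv8uaeq69s8ovbsuepu limit-usage /data 20GB
    gluster volume quota v3lq4elxb0fkv8uaeq69s8ovbsuepu list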
02:53 mohankumar joined #gluster
03:05 kshlm joined #gluster
03:20 \_pol joined #gluster
03:32 shubhendu joined #gluster
03:47 itisravi joined #gluster
04:01 vshankar joined #gluster
04:07 wgao joined #gluster
04:08 mohankumar joined #gluster
04:11 vpshastry1 joined #gluster
04:15 ppai joined #gluster
04:17 ndarshan joined #gluster
04:23 hagarth joined #gluster
04:26 kanagaraj joined #gluster
04:31 sgowda joined #gluster
04:40 mohankumar joined #gluster
04:42 shruti joined #gluster
04:43 ndarshan joined #gluster
04:50 hchiramm_ joined #gluster
04:51 bala joined #gluster
04:52 aravindavk joined #gluster
04:59 davinder2 joined #gluster
05:00 lalatenduM joined #gluster
05:03 anands joined #gluster
05:14 kanagaraj joined #gluster
05:19 ajha joined #gluster
05:23 atrius joined #gluster
05:41 nshaikh joined #gluster
05:43 Shdwdrgn joined #gluster
05:46 bala joined #gluster
05:51 mohankumar joined #gluster
05:54 atrius joined #gluster
05:57 aib_007 joined #gluster
05:58 20WAC5O75 joined #gluster
06:01 mohankumar joined #gluster
06:02 ndarshan joined #gluster
06:03 ppai joined #gluster
06:11 itisravi joined #gluster
06:11 anands joined #gluster
06:12 kshlm joined #gluster
06:12 sac joined #gluster
06:12 sac`away joined #gluster
06:12 aravindavk joined #gluster
06:12 shubhendu joined #gluster
06:12 ajha joined #gluster
06:12 sgowda joined #gluster
06:12 vshankar joined #gluster
06:13 hagarth joined #gluster
06:13 vpshastry1 joined #gluster
06:14 shylesh joined #gluster
06:19 rastar joined #gluster
06:19 lalatenduM joined #gluster
06:20 bulde joined #gluster
06:22 kanagaraj joined #gluster
06:26 jtux joined #gluster
06:28 ppai joined #gluster
06:28 ndarshan joined #gluster
06:32 mohankumar joined #gluster
06:34 bala joined #gluster
06:37 CheRi joined #gluster
06:45 kPb_in_ joined #gluster
06:46 ndarshan joined #gluster
06:53 davinder joined #gluster
06:56 jcsp joined #gluster
06:59 nasso joined #gluster
06:59 nasso hi! I'm having issues with the gluster volume quota command. it takes 4-10 seconds to list the quota of a volume
07:00 nasso is there any way to get around this and check the used space for a volume faster?
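One hedged partial workaround: the list subcommand also accepts an explicit path, so querying only the directory of interest may return faster than enumerating every limit (volume and path names below are placeholders).

    # list a single limited directory instead of all of them; names are hypothetical
    gluster volume quota myvol list /projects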
07:00 Guest60861 joined #gluster
07:03 ctria joined #gluster
07:09 eseyman joined #gluster
07:17 meghanam joined #gluster
07:17 meghanam_ joined #gluster
07:28 nshaikh joined #gluster
07:29 raghu joined #gluster
07:29 andreask joined #gluster
07:34 ricky-ticky joined #gluster
07:50 kshlm joined #gluster
07:50 kshlm joined #gluster
08:02 roidelapluie joined #gluster
08:03 ProT-0-TypE joined #gluster
08:05 gurgy joined #gluster
08:13 psharma joined #gluster
08:14 dneary joined #gluster
08:34 monotek left #gluster
08:39 Norky joined #gluster
08:40 gurgy joined #gluster
08:40 hagarth joined #gluster
08:42 DataBeaver joined #gluster
08:48 ngoswami joined #gluster
09:02 vimal joined #gluster
09:11 hybrid5121 joined #gluster
09:17 manik joined #gluster
09:21 vpshastry1 joined #gluster
09:24 ppai joined #gluster
09:32 mooperd joined #gluster
09:36 bulde1 joined #gluster
09:39 hchiramm_ joined #gluster
09:42 jcsp joined #gluster
09:43 ppai joined #gluster
09:48 vpshastry2 joined #gluster
09:50 jporterfield joined #gluster
09:51 sgowda joined #gluster
09:56 dseira Hi
09:56 glusterbot dseira: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
10:00 samppah hello hello
10:02 Rocky__ hi
10:02 glusterbot Rocky__: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
10:02 Rocky__ hehe
10:02 dseira I'm testing glusterFS using SSD as the volume storage; inside the volumes I'm going to store iso images. I don't know which is the best FS to use for the volumes
10:02 Rocky__ left #gluster
10:02 dseira what do you think?
10:02 samppah dseira: red hat recommends XFS
10:03 dseira for now, I tested with ext4
10:03 dseira and the performance was very poor
10:06 samppah dseira: what version are you using?
10:06 elyograg there's a bug with ext4 and gluster that's not fixed until the 3.4 version, so if you're not running that version, ext4 is horribly broken.
10:06 elyograg on most typical distros, anyway.
10:06 emil joined #gluster
10:06 nasso joined #gluster
10:06 dseira I've used gluster 3.4
10:07 dseira the problem is that I'm trying glusterFS as storage for ESXi virtual machines with an EXT4 FS and the performance tests were poor
10:08 dseira I only want 2 replicas for the volumes
10:08 dseira I'm going to test this scenario with xfs
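For reference, a minimal sketch of the XFS brick layout commonly recommended for gluster, with a 512-byte inode size so extended attributes fit inside the inode (device and mount point are assumptions, not taken from the log):

    # device and mount point are hypothetical
    mkfs.xfs -i size=512 /dev/sdb1
    mkdir -p /bricks/brick1
    mount /dev/sdb1 /bricks/brick1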
10:09 elyograg from what I understand (and I'm certainly not an expert), they've incorporated a native gluster client into QEMU, but it seems unlikely that vmware has anything similar.
10:09 elyograg so you'd have to have a locally mounted glusterfs filesystem on the hypervisor or access it over NFS.
10:09 dseira yes, the integration with qemu is done through libgfapi
10:10 dseira for the esxi i've used nfs
10:10 elyograg and now i'm going to bed.  afk.
10:10 dseira thanks
10:10 dseira bye
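For the NFS path mentioned above, a hedged example of mounting the volume from a Linux client; gluster's built-in NFS server speaks NFSv3 only, and an ESXi datastore would point at the same server:/volume export (host and volume names are placeholders).

    # gluster NFS is v3 over TCP; host and volume names are hypothetical
    mount -t nfs -o vers=3,tcp server1:/myvol /mnt/myvol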
10:19 samppah dseira: i'm not sure if there is much difference between xfs and ext4
10:19 samppah can you please tell me more about what you mean by poor performance?
10:19 samppah what kind of setup do you have, etc.?
10:20 dneary joined #gluster
10:25 dseira for example, to test the performance I use the fio tool. If I run the tool (random writes) directly on the SSD drive, the IOPS are about 27000, and if I do the same test inside a virtual machine located on a glusterFS volume the IOPS drop to ~5000
10:26 dseira the glusterFS is configured with only 1 replica (for testing) in the SSD drive
10:28 dseira I've read the tips for improving glusterFS performance, but I don't know whether there are any special parameters for this specific scenario (big files with lots of writes)
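A rough fio job along the lines of the random-write test described above; every parameter here is an assumption chosen for illustration, not the exact job that was run.

    # 4k random writes with direct I/O; all values are illustrative
    fio --name=randwrite --rw=randwrite --bs=4k --size=1g \
        --ioengine=libaio --iodepth=32 --direct=1 --numjobs=1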
10:31 samppah gluster volume volname replica 2 or replica 1?
10:32 samppah are the machines in same network?
10:35 vshankar joined #gluster
10:35 dseira replica 1, and yes the machines are in the same network
10:36 dseira replica 1 only for testing
10:37 samppah ok
10:38 samppah how many nodes do you have in the gluster setup?
10:42 dseira only one
10:43 dseira I only created a brick that is located in the ssd
10:43 samppah ok, have you tried running fio in several VMs at the same time?
10:44 dseira not fio, but in one VM I run fio and in another VM (stresslinux) I stress the cpu and the disk at maximum
10:44 samppah did it affect the write speeds of fio?
10:44 dseira and the fio performance drops to ~1000 or less
10:44 dseira yes, a lot
10:45 dseira but I think this behaviour is expected since gluster doesn't have per-volume IOPS quotas
10:46 dseira to configure the brick I've used this option: performance.io-thread-count: 64
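For context, that option is applied per volume with gluster volume set; a hedged example follows, with the volume name made up.

    # raise the io-threads pool on the brick processes; volume name is hypothetical
    gluster volume set myvol performance.io-thread-count 64
    gluster volume info myvol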
10:46 sgowda joined #gluster
10:49 dneary joined #gluster
10:51 vimal joined #gluster
10:52 shubhendu joined #gluster
10:57 jtux joined #gluster
11:01 mooperd joined #gluster
11:03 jporterfield joined #gluster
11:04 gurgy joined #gluster
11:11 andreask joined #gluster
11:17 jclift joined #gluster
11:26 jporterfield joined #gluster
11:32 jporterfield joined #gluster
11:39 mooperd joined #gluster
11:43 shubhendu joined #gluster
11:44 jporterfield joined #gluster
11:47 vpshastry joined #gluster
11:49 CheRi joined #gluster
11:49 ppai joined #gluster
11:55 jporterfield joined #gluster
12:00 B21956 joined #gluster
12:02 edward1 joined #gluster
12:09 vpshastry left #gluster
12:34 mooperd joined #gluster
12:36 bennyturns joined #gluster
12:41 mooperd joined #gluster
12:46 CheRi joined #gluster
12:47 gurgy joined #gluster
12:52 fleducquede joined #gluster
12:53 ajha joined #gluster
12:56 rcheleguini joined #gluster
13:01 vshankar joined #gluster
13:05 gurgy joined #gluster
13:12 harish_ joined #gluster
13:14 gurgy joined #gluster
13:15 Dave_H joined #gluster
13:19 ndarshan left #gluster
13:35 mooperd joined #gluster
13:45 Chocobo In the documentation it mentions striping should only be considered for high-concurrency environments accessing very large files. What does "very large" mean?
13:47 jporterfield joined #gluster
13:48 manik joined #gluster
13:50 kkeithley_ bigger than any individual brick can hold is one definition
13:50 Chocobo kkeithley_: thanks.  That would make sense
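A hedged sketch of the stripe option being discussed; in this era striping was requested at volume-creation time, and the host and brick names below are invented.

    # 2-way stripe across two bricks; all names are placeholders
    gluster volume create stripevol stripe 2 server1:/bricks/b1 server2:/bricks/b2
    gluster volume start stripevol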
13:58 dneary joined #gluster
13:59 mooperd joined #gluster
14:00 hagarth joined #gluster
14:00 mooperd joined #gluster
14:03 jdarcy joined #gluster
14:06 cmcdermott1 joined #gluster
14:07 bugs_ joined #gluster
14:11 cmcdermott1 Is there any good documentation available listing the TCP ports that gluster uses to communicate?
14:12 cmcdermott1 I'm actually mostly curious about the non-privileged ports it uses when initiating outbound connections
14:12 cmcdermott1 I'm having a hard time defining appropriate layer 2 ACLs because it seems like gluster is choosing reserved ports like 1019, 1021, etc.
14:13 StarBeast joined #gluster
14:15 cmcdermott1 Also, this page: http://www.gluster.org/community/documentation/index.php/Basic_Gluster_Troubleshooting says that ports 34865-34867 are used for "the inline gluster nfs server", but if I don't open those between the peers, they become disconnected
14:15 glusterbot <http://goo.gl/7m2Ln> (at www.gluster.org)
14:18 _NiC joined #gluster
14:18 partner joined #gluster
14:18 wushudoin joined #gluster
14:26 vpshastry joined #gluster
14:27 StarBeast joined #gluster
14:49 ndevos ~ports | cmcdermott1
14:49 glusterbot cmcdermott1: glusterd's management port is 24007/tcp and 24008/tcp if you use rdma. Bricks (glusterfsd) use 24009 & up for <3.4 and 49152 & up for 3.4. (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111 and 2049 since 3.4.
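A hedged firewall sketch based on glusterbot's list for a 3.4 setup; the brick-port range is an assumption and should be widened to cover however many bricks each server exports.

    # management, bricks (3.4+), gluster NFS/NLM, and portmapper; ranges are illustrative
    iptables -A INPUT -p tcp --dport 24007:24008 -j ACCEPT
    iptables -A INPUT -p tcp --dport 49152:49160 -j ACCEPT
    iptables -A INPUT -p tcp --dport 38465:38468 -j ACCEPT
    iptables -A INPUT -p tcp -m multiport --dports 111,2049 -j ACCEPT
    iptables -A INPUT -p udp --dport 111 -j ACCEPT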
14:53 shylesh joined #gluster
14:55 jporterfield joined #gluster
14:55 zaitcev joined #gluster
14:55 rgustafs joined #gluster
15:10 zerick joined #gluster
15:16 cmcdermott joined #gluster
15:20 vpshastry joined #gluster
15:20 vpshastry left #gluster
15:23 mooperd joined #gluster
15:28 XpX joined #gluster
15:29 mooperd joined #gluster
15:34 jruggiero joined #gluster
15:36 jruggiero left #gluster
15:40 lalatenduM joined #gluster
15:43 neofob joined #gluster
15:50 mooperd joined #gluster
15:53 LoudNoises joined #gluster
16:00 paratai_ joined #gluster
16:07 theron joined #gluster
16:22 kPb_in joined #gluster
16:24 glusterbot New news from newglusterbugs: [Bug 1009980] Glusterd won't start on Fedora19 <http://goo.gl/a8uHQ5>
16:32 cmcdermott left #gluster
16:32 Mo__ joined #gluster
16:37 sprachgenerator joined #gluster
16:41 \_pol joined #gluster
17:06 kaptk2 joined #gluster
17:11 SpeeR joined #gluster
17:22 B21956 joined #gluster
17:33 atrius joined #gluster
17:34 Elend joined #gluster
17:35 Elend hi there
17:36 StarBeast joined #gluster
17:44 zapotah joined #gluster
17:45 glusterbot New news from resolvedglusterbugs: [Bug 953694] Requirements of Samba VFS plugin for glusterfs <http://goo.gl/v7g29>
17:57 Elend I have a 3.3.0 setup with a replica 2 volume which has an issue with syncing some files
17:58 hagarth1 joined #gluster
17:59 Elend the gluster vol heal "volume" info command displays 200 entries on the first brick, all identified by gfid, and heal info split-brain crashes after the first line, which is the last file listed by the info command
18:00 Elend if i check in the .glusterfs/$id:0:2/$id:2:2/ folder, the gfid file is present with a hard link count of 1 on the server hosting the first brick, but absent on the second
18:01 Elend i tried to view the content and it seems to be a program install folder we've copied to another volume
18:02 Elend the question is "how do i get rid of it properly"
18:02 Elend is stopping the volume and manually deleting these files a good way?
18:03 Elend if someone can help
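A hedged way to inspect one of those gfid entries before deleting anything (the brick path and gfid below are made-up examples): if the file under .glusterfs has a link count greater than 1, find -samefile will show its real path on the brick; a regular file with a link count of 1 is usually an orphaned entry.

    # brick path and gfid are hypothetical
    stat /bricks/b1/.glusterfs/ab/cd/abcd1234-0000-0000-0000-000000000000
    find /bricks/b1 -samefile \
        /bricks/b1/.glusterfs/ab/cd/abcd1234-0000-0000-0000-000000000000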
18:05 tziOm joined #gluster
18:17 vpshastry joined #gluster
18:21 elend2 joined #gluster
18:21 elend3 joined #gluster
18:25 vpshastry left #gluster
18:26 marcoceppi joined #gluster
18:27 * marcoceppi waves @ semiosis
18:44 ctria joined #gluster
18:45 semiosis hi marcoceppi
18:47 jporterfield joined #gluster
19:00 jporterfield joined #gluster
19:00 kkeithley1 joined #gluster
19:03 axisys_ joined #gluster
19:09 lpabon joined #gluster
19:18 purpleidea ke4qqq: w00t
19:21 jporterfield joined #gluster
19:31 jskinner_ joined #gluster
19:43 jporterfield joined #gluster
19:52 jporterfield joined #gluster
20:00 ChiTo joined #gluster
20:00 ChiTo Hi everybody
20:00 ChiTo Does Gluster support compression and deduplication?
20:04 jdarcy ChiTo: Not currently.
20:12 l4v joined #gluster
20:13 davinder joined #gluster
20:13 jporterfield joined #gluster
20:15 zerick joined #gluster
20:16 ChiTo Has anybody tried ZFS on top of glusterfs to leverage dedup and compression features?
20:22 tziOm joined #gluster
20:24 jporterfield joined #gluster
20:26 SpeeR joined #gluster
20:57 glusterbot New news from newglusterbugs: [Bug 1010068] enhancement: Add --wait switch to cause glusterd to stay in the foreground until child services are started <http://goo.gl/QBlOM8>
21:01 atrius joined #gluster
21:11 jporterfield joined #gluster
21:12 JoeJulian ChiTo: I've heard of people that started using it, but none of them have reported results.
21:13 purpleidea ChiTo: you mean glusterfs on top of ZFS? (one person does this, but it's not recommended) Long-term licensing issues with ZFS won't let it get anywhere, especially not supported by Red Hat...
21:14 purpleidea FYI: JoeJulian's laptop just exploded
21:22 B21956 left #gluster
21:24 Bjorklund joined #gluster
21:42 ChiTo purpleidea, JoeJulian thank you very much, I am looking for some dedup and compression capabilities built into the filesystem
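For what it's worth, a hedged sketch of what a ZFS-backed brick with those features enabled could look like (pool, device and dataset names are invented, and as noted above this combination is not recommended or supported):

    # all names are placeholders; dedup in particular needs a lot of RAM
    zpool create tank /dev/sdb
    zfs create tank/brick1
    zfs set compression=on tank/brick1
    zfs set dedup=on tank/brick1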
21:57 glusterbot New news from newglusterbugs: [Bug 1004519] SMB:smbd crashes while doing volume operations <http://goo.gl/DMsNHh>
22:04 _br_ joined #gluster
22:06 daMaestro joined #gluster
22:07 efries joined #gluster
22:19 andreask joined #gluster
22:23 Amanda joined #gluster
22:27 ndk joined #gluster
22:30 y4m4 joined #gluster
22:34 uebera|| joined #gluster
22:35 MinhP joined #gluster
22:48 MinhP joined #gluster
23:09 jporterfield joined #gluster
23:12 rcheleguini joined #gluster
23:47 johnbot11 joined #gluster
