IRC log for #gluster, 2015-06-22


All times shown according to UTC.

Time Nick Message
00:01 DV_ joined #gluster
00:02 gildub joined #gluster
00:30 cholcombe joined #gluster
00:53 akay1 joined #gluster
01:07 gildub joined #gluster
02:15 harish joined #gluster
02:19 kdhananjay joined #gluster
02:37 Peppard joined #gluster
02:42 nangthang joined #gluster
03:04 bharata-rao joined #gluster
03:13 maveric_amitc_ joined #gluster
03:41 woakes070048 joined #gluster
03:51 shubhendu joined #gluster
03:53 TheSeven joined #gluster
03:56 sakshi joined #gluster
03:58 itisravi joined #gluster
04:03 atinm joined #gluster
04:08 itisravi_ joined #gluster
04:08 R0ok_ joined #gluster
04:10 badone_ joined #gluster
04:13 badone__ joined #gluster
04:15 badone joined #gluster
04:22 nbalacha joined #gluster
04:26 maveric_amitc_ joined #gluster
04:29 ndarshan joined #gluster
04:37 icosa joined #gluster
04:39 kshlm joined #gluster
04:41 ppai joined #gluster
04:50 ramteid joined #gluster
04:51 zeittunnel joined #gluster
04:54 maveric_amitc_ joined #gluster
04:56 vimal joined #gluster
04:59 meghanam joined #gluster
05:03 hgowtham joined #gluster
05:04 gem joined #gluster
05:04 ashiq joined #gluster
05:07 Manikandan joined #gluster
05:11 pppp joined #gluster
05:15 spandit joined #gluster
05:23 hchiramm joined #gluster
05:26 Bhaskarakiran joined #gluster
05:26 karnan joined #gluster
05:37 maveric_amitc_ joined #gluster
05:44 soumya_ joined #gluster
05:47 nbalacha joined #gluster
05:51 maveric_amitc_ joined #gluster
05:53 kdhananjay joined #gluster
05:54 TvL2386 joined #gluster
05:55 meghanam joined #gluster
06:02 nsoffer joined #gluster
06:04 soumya_ joined #gluster
06:05 gem joined #gluster
06:08 deepakcs joined #gluster
06:15 rejy joined #gluster
06:17 mkzero joined #gluster
06:17 jtux joined #gluster
06:18 raghu joined #gluster
06:18 overclk joined #gluster
06:20 atalur joined #gluster
06:26 gfranx joined #gluster
06:26 spalai joined #gluster
06:28 mbukatov joined #gluster
06:29 overclk joined #gluster
06:34 gem joined #gluster
06:38 anil joined #gluster
06:38 RameshN joined #gluster
06:39 atalur joined #gluster
06:40 harish joined #gluster
06:55 badone_ joined #gluster
06:56 SOLDIERz joined #gluster
07:03 glusterbot News from newglusterbugs: [Bug 1234225] Data Tiering: add tiering set options to volume set help (cluster.tier-demote-frequency and cluster.tier-promote-frequency) <https://bugzilla.redhat.com/show_bug.cgi?id=1234225>
07:10 soumya_ joined #gluster
07:10 nbalacha joined #gluster
07:13 glusterbot News from resolvedglusterbugs: [Bug 1221577] glusterfsd crashed on a quota enabled volume where snapshots were scheduled <https://bugzilla.redhat.com/show_bug.cgi?id=1221577>
07:13 glusterbot News from resolvedglusterbugs: [Bug 1223739] Quota: Do not allow set/unset  of quota limit in heterogeneous cluster <https://bugzilla.redhat.com/show_bug.cgi?id=1223739>
07:13 glusterbot News from resolvedglusterbugs: [Bug 1223798] Quota: spurious failures with quota testcases <https://bugzilla.redhat.com/show_bug.cgi?id=1223798>
07:13 glusterbot News from resolvedglusterbugs: [Bug 1211220] quota: ENOTCONN parodically seen in logs when setting hard/soft timeout during I/O. <https://bugzilla.redhat.com/show_bug.cgi?id=1211220>
07:13 glusterbot News from resolvedglusterbugs: [Bug 1213364] [RFE] Quota: Make "quota-deem-statfs" option ON, by default, when quota is enabled. <https://bugzilla.redhat.com/show_bug.cgi?id=1213364>
07:16 deniszh joined #gluster
07:18 gfranx joined #gluster
07:19 kevein joined #gluster
07:22 zeittunnel joined #gluster
07:22 SOLDIERz joined #gluster
07:28 badone__ joined #gluster
07:28 meghanam joined #gluster
07:29 zeittunnel joined #gluster
07:30 jiffin joined #gluster
07:33 glusterbot News from newglusterbugs: [Bug 1230857] Files migrated should stay on a tier for a full cycle <https://bugzilla.redhat.com/show_bug.cgi?id=1230857>
07:40 Manikandan joined #gluster
07:43 glusterbot News from resolvedglusterbugs: [Bug 1191486] daemons abstraction & refactoring <https://bugzilla.redhat.com/show_bug.cgi?id=1191486>
07:44 ctria joined #gluster
07:50 abrt joined #gluster
07:51 nangthang joined #gluster
07:55 Slashman joined #gluster
08:01 schandra joined #gluster
08:04 al joined #gluster
08:07 rgustafs joined #gluster
08:09 ConSi joined #gluster
08:14 s19n joined #gluster
08:14 vincent_vdk joined #gluster
08:14 kdhananjay joined #gluster
08:15 kdhananjay joined #gluster
08:17 elico joined #gluster
08:24 ghenry joined #gluster
08:24 ghenry joined #gluster
08:26 NTQ joined #gluster
08:28 gem joined #gluster
08:28 NTQ Hi. I want to use GlusterFS for a replicated filesystem between three locations. Two of them will have 3 TB of space, the last one 1 TB. Before the 1 TB drive reaches its limit I want to add an additional 1 TB drive. Is this possible?
08:43 autoditac joined #gluster
08:45 shubhendu joined #gluster
08:47 ndarshan joined #gluster
08:54 rjoseph joined #gluster
08:56 poornimag joined #gluster
08:56 kdhananjay1 joined #gluster
08:57 mattmcc NTQ: Even if gluster allows it, you probably shouldn't try to set up replication with bricks of different sizes.
08:57 mattmcc After all, the maximum size of the replicated volume would still be the size of the smallest brick, wouldn't it?
09:02 Marqin joined #gluster
09:04 meghanam_ joined #gluster
09:04 soumya_ joined #gluster
09:11 nsoffer joined #gluster
09:17 Manikandan joined #gluster
09:21 soumya_ joined #gluster
09:24 NTQ mattmcc: But even if the first two servers initially have 3 TB of space, could I configure GlusterFS to use only 1 TB? Later I can add another 1 TB drive to the third server and configure the first two to use 2 TB of space.
09:25 Guest35559 joined #gluster
09:28 NTQ Another question: Do I need a dedicated block device for GlusterFS, or is it possible to run GlusterFS simply on a single directory?
09:29 poornimag joined #gluster
09:30 msvbhat NTQ: It's possible to run glusterfs on a single directory
09:30 NTQ Cool
09:31 mattmcc NTQ: You could certainly allocate only 1TB bricks on your machines that have 3TB free.
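
A minimal sketch of the layout discussed above, assuming three hypothetical hosts (node1, node2, node3), a 1 TB partition mounted at /bricks/vol0 on each, and a volume named vol0; none of these names come from the log:

    # from node1, add the other two peers to the trusted pool
    gluster peer probe node2
    gluster peer probe node3

    # a brick is just a directory; here each one sits on its own 1 TB partition
    gluster volume create vol0 replica 3 \
        node1:/bricks/vol0/brick node2:/bricks/vol0/brick node3:/bricks/vol0/brick
    gluster volume start vol0
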
09:32 gfranx joined #gluster
09:33 Guest84694 joined #gluster
09:33 gem joined #gluster
09:34 rjoseph joined #gluster
09:35 Manikandan joined #gluster
09:36 s19n Hi all! in the last meeting minutes I read that "3.6.4 will likely get released later this week"
09:36 gem joined #gluster
09:37 s19n I'd like to know if https://bugzilla.redhat.com/show_bug.cgi?id=1113778 and https://bugzilla.redhat.com/show_bug.cgi?id=1168897 will get a fix in that release
09:37 glusterbot Bug 1113778: medium, unspecified, ---, pkarampu, ASSIGNED , gluster volume heal info keep reports "Volume heal failed"
09:37 glusterbot Bug 1168897: medium, medium, ---, bugs, NEW , Attempt remove-brick after node has terminated in cluster gives error: volume remove-brick commit force: failed: One or more nodes do not support the required op-version. Cluster op-version must atleast be 30600.
09:43 maveric_amitc_ joined #gluster
09:44 kdhananjay joined #gluster
09:47 gem_ joined #gluster
09:49 gem_ joined #gluster
09:53 ndarshan joined #gluster
09:53 shubhendu joined #gluster
09:53 sysconfig joined #gluster
09:55 abrt joined #gluster
10:01 NTQ What happens to a replicated cluster brick if the server was disconnected from the network or down for a while? Will it automatically mirror to the current state from another server after restart?
10:03 spalai left #gluster
10:03 spalai joined #gluster
10:04 glusterbot News from newglusterbugs: [Bug 1229914] glusterfs self heal takes too long following node outage <https://bugzilla.redhat.com/show_bug.cgi?id=1229914>
10:16 LebedevRI joined #gluster
10:24 meghanam_ joined #gluster
10:25 soumya_ joined #gluster
10:26 stickyboy joined #gluster
10:26 stickyboy joined #gluster
10:27 elico joined #gluster
10:28 ndarshan joined #gluster
10:28 msvbhat NTQ: When a brick (or the node containing the brick) goes down and then comes back up later, the self-heal will actually sync the data to the brick/node which was down from the one which was up
10:34 glusterbot News from newglusterbugs: [Bug 1234296] Quota: Porting logging messages to new logging framework <https://bugzilla.redhat.com/show_bug.cgi?id=1234296>
10:34 glusterbot News from newglusterbugs: [Bug 1234297] Quota: Porting logging messages to new logging framework <https://bugzilla.redhat.com/show_bug.cgi?id=1234297>
10:34 msvbhat You might have to trigger the self-heal by using the CLI though
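
A short sketch of the CLI-triggered heal msvbhat mentions, assuming a hypothetical volume name vol0:

    # heal the files that are marked as needing it
    gluster volume heal vol0

    # or walk every file, not only those marked pending
    gluster volume heal vol0 full

    # list entries that still need healing
    gluster volume heal vol0 info
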
10:35 anrao joined #gluster
10:43 sankarsh` joined #gluster
10:43 VeggieMeat joined #gluster
10:43 khanku joined #gluster
10:43 al joined #gluster
10:43 crashmag joined #gluster
10:43 ctria joined #gluster
10:43 overclk joined #gluster
10:43 hgowtham joined #gluster
10:43 ultrabizweb_ joined #gluster
10:43 deepakcs joined #gluster
10:43 partner joined #gluster
10:44 sage joined #gluster
10:45 NTQ1 joined #gluster
10:46 Pupeno joined #gluster
10:47 [o__o] joined #gluster
10:49 hchiramm joined #gluster
10:51 pppp joined #gluster
10:51 anrao joined #gluster
10:54 NTQ joined #gluster
10:58 harish_ joined #gluster
10:58 Sjors joined #gluster
11:02 spandit joined #gluster
11:02 rjoseph joined #gluster
11:03 poornimag joined #gluster
11:08 gfranx joined #gluster
11:08 Pupeno joined #gluster
11:08 anrao joined #gluster
11:13 gildub joined #gluster
11:14 glusterbot News from resolvedglusterbugs: [Bug 1219358] Disperse volume: client crashed while running iozone <https://bugzilla.redhat.com/show_bug.cgi?id=1219358>
11:15 anrao joined #gluster
11:15 atalur joined #gluster
11:16 nbalacha joined #gluster
11:19 soumya_ joined #gluster
11:24 rjoseph joined #gluster
11:24 owlbot joined #gluster
11:24 _Bryan_ joined #gluster
11:29 elico joined #gluster
11:29 Sjors joined #gluster
11:34 glusterbot News from newglusterbugs: [Bug 1234314] cluster.nufa :- Rebalance migrates files according to hash value eventhough cluster.nufa  is on <https://bugzilla.redhat.com/show_bug.cgi?id=1234314>
11:35 Bhaskarakiran joined #gluster
11:39 Pupeno joined #gluster
11:39 anrao joined #gluster
11:46 _Bryan_ joined #gluster
11:56 jcastill1 joined #gluster
11:57 zeittunnel joined #gluster
12:02 jcastillo joined #gluster
12:08 jtux joined #gluster
12:13 woakes070048 joined #gluster
12:14 DV__ joined #gluster
12:15 bene2 joined #gluster
12:17 soumya joined #gluster
12:20 poornimag joined #gluster
12:22 rgustafs joined #gluster
12:25 itisravi_ joined #gluster
12:30 spandit joined #gluster
12:30 PatNarciso Good morning Gluster.
12:31 elico left #gluster
12:32 itisravi joined #gluster
12:34 LebedevRI joined #gluster
12:34 glusterbot News from newglusterbugs: [Bug 1134050] Glfs_fini() not freeing the resources <https://bugzilla.redhat.com/show_bug.cgi?id=1134050>
12:35 autoditac joined #gluster
12:39 anrao joined #gluster
12:39 ndevos hi PatNarciso, its afternoon here already ;-)
12:42 teknologeek joined #gluster
12:43 teknologeek hi all
12:43 teknologeek I have strange behavior with glusterfs and  nfs client
12:44 teknologeek unix sockets created on the nfs share actually appear as fifo
12:44 teknologeek any clue about this issue ?
12:44 PatNarciso ndevos, where are you today?
12:45 ndevos PatNarciso: I'm near Amsterdam in The Netherlands, and you?
12:46 ndevos teknologeek: uh, I might be confused, but, isnt that expected?
12:47 smohan joined #gluster
12:49 elico joined #gluster
12:49 ndevos teknologeek: ah, wait, no, maybe not...
12:51 bennyturns joined #gluster
12:51 ndevos teknologeek: that definitely sounds like a bug, NFSv3 should support sockets - http://tools.ietf.org/html/rfc1813#page-20
12:51 ndevos teknologeek: please file a bug for this, with some steps/script that reproduce it
12:51 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
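
One possible way to reproduce the reported behaviour for such a bug report; the server name server1, volume name vol0, and mount point /mnt/nfs are assumptions, not from the log:

    # mount the volume over Gluster's NFSv3 server
    mount -t nfs -o vers=3 server1:/vol0 /mnt/nfs

    # create a unix socket on the NFS mount
    python -c 'import socket; socket.socket(socket.AF_UNIX).bind("/mnt/nfs/test.sock")'

    # the report is that this prints "fifo" instead of "socket"
    stat -c %F /mnt/nfs/test.sock
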
12:55 autoditac joined #gluster
12:59 chirino joined #gluster
12:59 teknologeek i am pretty embarrassed
13:00 teknologeek i have to put my glusterfs in production soon but there are too many bugs
13:00 autoditac joined #gluster
13:00 PatNarciso ndevos, Orlando, Florida.
13:01 B21956 joined #gluster
13:03 nsoffer joined #gluster
13:07 theron joined #gluster
13:07 aaronott joined #gluster
13:09 hagarth joined #gluster
13:11 B21956 joined #gluster
13:11 rwheeler joined #gluster
13:21 B21956 joined #gluster
13:27 foster joined #gluster
13:29 dgandhi joined #gluster
13:30 smohan joined #gluster
13:30 B21956 joined #gluster
13:31 squizzi joined #gluster
13:37 RameshN joined #gluster
13:38 PatNarciso I've got a distributed 10-brick volume (xfs) on a single raid6.  often, 'ls -laR' performs respectably.  however lately, it has been getting slow... real slow.  making samba browsing unbearable.   any suggestions on where I should focus to improve performance?
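
One way to narrow down where the listing time goes is Gluster's built-in profiler; a sketch only, assuming a hypothetical volume name vol0:

    gluster volume profile vol0 start
    # reproduce the slow 'ls -laR', then see which FOPs (LOOKUP, READDIRP, STAT)
    # dominate the per-brick latency
    gluster volume profile vol0 info
    gluster volume profile vol0 stop
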
13:40 krink joined #gluster
13:45 hamiller joined #gluster
13:58 arcolife joined #gluster
14:03 wkf joined #gluster
14:08 bene2 joined #gluster
14:12 karnan joined #gluster
14:14 hchiramm_home joined #gluster
14:17 theron joined #gluster
14:26 soumya joined #gluster
14:33 spalai left #gluster
14:40 Pupeno joined #gluster
14:41 nage joined #gluster
14:47 kdhananjay joined #gluster
14:54 pppp joined #gluster
14:56 RameshN joined #gluster
14:57 nbalacha joined #gluster
14:58 julim joined #gluster
14:58 mrdk joined #gluster
15:00 mrdk in our two-node glusterfs setup, "gluster peer status" displays the line "State: Sent and Received peer request (Connected)". What exactly does that mean? "gluster volume list" does not show the same number of volumes on the two nodes. Any hints?
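
For the mismatched volume lists, one option is to re-sync the volume definitions from the peer that has the complete set; a sketch only, where the source hostname node1 and the choice of which node to sync from are assumptions:

    # on the node that is missing volumes
    gluster volume sync node1 all

    # then re-check both sides
    gluster peer status
    gluster volume list
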
15:02 overclk joined #gluster
15:04 DV__ joined #gluster
15:06 kanagaraj joined #gluster
15:06 kanagaraj joined #gluster
15:10 krink joined #gluster
15:31 arcolife joined #gluster
15:32 kanagaraj joined #gluster
15:40 nbalacha joined #gluster
15:40 karnan joined #gluster
15:41 cholcombe joined #gluster
15:46 Gill joined #gluster
15:58 CyrilPeponnet joined #gluster
15:58 CyrilPeponnet joined #gluster
15:58 CyrilPeponnet joined #gluster
15:59 CyrilPeponnet joined #gluster
15:59 CyrilPeponnet joined #gluster
15:59 CyrilPeponnet joined #gluster
16:01 nbalacha joined #gluster
16:04 squizzi joined #gluster
16:06 cholcombe joined #gluster
16:10 shubhendu joined #gluster
16:12 nage joined #gluster
16:13 s19n left #gluster
16:14 maveric_amitc_ joined #gluster
16:18 krink joined #gluster
16:21 elico joined #gluster
16:29 Gill_ joined #gluster
16:30 Gill joined #gluster
16:33 RameshN joined #gluster
16:33 Gill joined #gluster
16:37 Gill left #gluster
16:44 arcolife joined #gluster
17:06 firemanxbr joined #gluster
17:12 B100D9 joined #gluster
17:13 B100D9 Hello everyone. I was wondering if it were possible to break apart a gluster replicated volume?
17:19 RameshN_ joined #gluster
17:31 shubhendu joined #gluster
17:32 Rapture joined #gluster
17:35 Leildin What do you mean by break apart ?
17:43 B100D9 Sorry, I figured it out. I actually wanted to remove a hung brick. And I found the command for it.
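
For reference, removing a brick from a replicated volume also means lowering the replica count; a sketch with hypothetical names (volume vol0 going from replica 3 to 2, node3 holding the dead brick):

    gluster volume remove-brick vol0 replica 2 node3:/bricks/vol0/brick force
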
18:04 ttkg joined #gluster
18:06 rotbeard joined #gluster
18:32 chirino joined #gluster
18:58 arcolife joined #gluster
19:01 aaronott joined #gluster
19:14 jobewan joined #gluster
19:16 deniszh joined #gluster
19:19 Gill joined #gluster
19:25 chirino joined #gluster
19:38 papamoose joined #gluster
19:53 papamoose left #gluster
19:54 TheSeven error: unpacking of archive failed on file /var/lib/glusterd/hooks/1/delete/post/S57glusterfind-delete-post.py;558867ac:
19:55 TheSeven looks like the glusterfs-server-3.7.2-1.el7.x86_64 package in the centos latest repository is broken
19:55 ndevos TheSeven: yeah, we figured that out too.... kkeithley is working on new RPMs; they are expected tomorrow
19:56 ndevos TheSeven: you can "mkdir -p /var/lib/glusterd/hooks/1/delete/post" before installing the rpms as a workaround
19:56 * TheSeven tries
19:57 TheSeven indeed, seems to install fine now
20:00 ndevos its at least a little victory, yay
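
The workaround as a pair of commands; the directory and package version are the ones quoted above, while the use of yum as the installer is an assumption:

    mkdir -p /var/lib/glusterd/hooks/1/delete/post
    yum install glusterfs-server-3.7.2-1.el7.x86_64
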
20:18 woakes070048 joined #gluster
20:27 Gill_ joined #gluster
20:27 DV joined #gluster
20:33 PatNarciso is there a known bug/condition where an acl may be set on the underlying brick(s), but the native-gluster mount doesn't show the acl?
20:51 Pupeno_ joined #gluster
20:51 badone__ joined #gluster
20:55 nsoffer joined #gluster
21:08 Gill_ joined #gluster
21:15 ndevos PatNarciso: you have to specify the "acl" option when mounting, it is not enabled by default
21:16 PatNarciso gotcha.   happening now.
21:17 PatNarciso apparently in the past, acl was provided to my mount point; and then the mountpoint was changed to nfs, where it wasn't provided.   and permissions are causing a small mess: acl vs non-acl.
21:18 PatNarciso running setfacl -b -k -R; like a boss; and testing results now.
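
The mount option ndevos refers to, as a sketch with hypothetical server, volume, and mount-point names:

    # FUSE mount with POSIX ACL support enabled
    mount -t glusterfs -o acl server1:/vol0 /mnt/gluster

    # equivalent fstab entry
    server1:/vol0  /mnt/gluster  glusterfs  defaults,acl  0 0
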
21:30 DV joined #gluster
21:30 Gill_ joined #gluster
21:33 badone__ joined #gluster
21:35 PatNarciso ... and now we're cooking.  thanks ndevos!
21:40 bennyturns joined #gluster
21:42 Pupeno joined #gluster
21:42 ueberall joined #gluster
21:43 akay1 joined #gluster
21:45 lexi2 joined #gluster
21:46 capri joined #gluster
21:54 marcoceppi joined #gluster
21:54 Bosse_ joined #gluster
21:55 siel_ joined #gluster
21:58 necrogami joined #gluster
21:59 hagarth joined #gluster
22:04 ndevos @later tell teknologeek did you file a bug for that fifo/socket problem on nfs? http://review.gluster.org/11355 might be the fix
22:04 glusterbot ndevos: The operation succeeded.
22:08 swebb joined #gluster
22:27 joshin left #gluster
22:28 cyberbootje joined #gluster
22:38 capri joined #gluster
22:47 victori joined #gluster
23:20 sysconfig joined #gluster
23:35 capri joined #gluster
