
IRC log for #gluster, 2015-04-20


All times are shown in UTC.

Time Nick Message
00:06 plarsen joined #gluster
00:29 plarsen joined #gluster
00:33 bene2 joined #gluster
00:37 badone__ joined #gluster
00:40 glusterbot News from newglusterbugs: [Bug 1210686] DHT :- core was generated while running few test on volume <https://bugzilla.redhat.com/show_bug.cgi?id=1210686>
00:40 glusterbot News from newglusterbugs: [Bug 1207534] glusterd : unable to start glusterd after hard reboot as one of the peer info file is truncated to 0 byte <https://bugzilla.redhat.com/show_bug.cgi?id=1207534>
00:42 glusterbot News from resolvedglusterbugs: [Bug 859826] Volume start fails with error 'volume start: <vol_name>: failed <https://bugzilla.redhat.com/show_bug.cgi?id=859826>
00:42 glusterbot News from resolvedglusterbugs: [Bug 786006] glusterd friend/op sm could be used even before its initialisation <https://bugzilla.redhat.com/show_bug.cgi?id=786006>
00:42 glusterbot News from resolvedglusterbugs: [Bug 806996] [FEAT] Provide pre/post glusterd operation hooks for certain commands <https://bugzilla.redhat.com/show_bug.cgi?id=806996>
00:42 glusterbot News from resolvedglusterbugs: [Bug 830718] CTDB : hook script 'S29CTDBsetup.sh' fail to mount replicated volume on 'Gluster/lock', <https://bugzilla.redhat.com/show_bug.cgi?id=830718>
00:42 glusterbot News from resolvedglusterbugs: [Bug 836448] CTDB: Noticeable delay in CTDB failover(ranging between 6-14 mins) <https://bugzilla.redhat.com/show_bug.cgi?id=836448>
00:42 glusterbot News from resolvedglusterbugs: [Bug 1077516] [RFE] :-  Move the container for changelogs from /var/run to /var/lib/misc <https://bugzilla.redhat.com/show_bug.cgi?id=1077516>
00:42 glusterbot News from resolvedglusterbugs: [Bug 882780] make fails with error 'cli-xml-output.c:3173:48: error: unknown type name ‘xmlTextWriterPtr’  make[2]: *** [cli-xml-output.o] Error 1 make[1]: *** [install-recursive] Error 1" <https://bugzilla.redhat.com/show_bug.cgi?id=882780>
00:42 glusterbot News from resolvedglusterbugs: [Bug 773187] [glusterfs-3.3.0qa19]: directory entry not present in first subvolume <https://bugzilla.redhat.com/show_bug.cgi?id=773187>
00:42 glusterbot News from resolvedglusterbugs: [Bug 835494] Volume creation fails and gives error "<brickname> or a prefix of it is already part of a volume", eventhough that brick is not part of any volume. <https://bugzilla.redhat.com/show_bug.cgi?id=835494>
00:42 glusterbot News from resolvedglusterbugs: [Bug 868796] glusterd crashed <https://bugzilla.redhat.com/show_bug.cgi?id=868796>
00:42 glusterbot News from resolvedglusterbugs: [Bug 868801] gluster volume creation says 'success' but volume does not exist on any of the peers <https://bugzilla.redhat.com/show_bug.cgi?id=868801>
00:42 glusterbot News from resolvedglusterbugs: [Bug 795289] trusted.glusterfs.pathinfo returns I/O error for a dht volume <https://bugzilla.redhat.com/show_bug.cgi?id=795289>
00:42 glusterbot News from resolvedglusterbugs: [Bug 798864] Fix rebalance check_free_space logic <https://bugzilla.redhat.com/show_bug.cgi?id=798864>
00:42 glusterbot News from resolvedglusterbugs: [Bug 819444] for few directories, ls command is giving 'Invalid argument' when one of the server(brick, distributed volume) is down <https://bugzilla.redhat.com/show_bug.cgi?id=819444>
00:54 Jmainguy glusterfs doesn't play nice with SELinux on centos7
00:54 Jmainguy is that correct?
00:57 Jmainguy nvm, setsebool -P virt_use_fusefs 1
00:57 Jmainguy fixed my issue
00:57 Jmainguy trying to backup virt to fusefs
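
A minimal sketch of the fix described above, assuming CentOS 7 with SELinux enforcing and libvirt/qemu writing backups to a FUSE-mounted gluster volume:

    # check the current value of the boolean
    getsebool virt_use_fusefs

    # allow virt-managed processes to use FUSE filesystems; -P persists across reboots
    setsebool -P virt_use_fusefs 1

    # if access is still denied, inspect recent AVC denials
    ausearch -m avc -ts recent
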
01:18 brianw joined #gluster
01:48 Le22S joined #gluster
01:48 julim joined #gluster
01:53 plarsen joined #gluster
01:59 nangthang joined #gluster
02:03 hagarth joined #gluster
02:27 zombiejebus joined #gluster
02:49 bharata-rao joined #gluster
03:06 plarsen joined #gluster
03:10 badone_ joined #gluster
03:14 kdhananjay joined #gluster
03:32 nbalacha joined #gluster
03:34 atinmu joined #gluster
03:37 overclk joined #gluster
03:45 wkf joined #gluster
03:47 itisravi joined #gluster
03:49 sripathi joined #gluster
03:50 kshlm joined #gluster
03:51 kumar joined #gluster
03:57 kaushal_ joined #gluster
04:09 overclk_ joined #gluster
04:18 DV joined #gluster
04:20 kshlm joined #gluster
04:21 nbalacha joined #gluster
04:22 shubhendu joined #gluster
04:27 DV joined #gluster
04:27 jiffin joined #gluster
04:28 nbalacha joined #gluster
04:32 DV joined #gluster
04:34 kanagaraj joined #gluster
04:36 schandra joined #gluster
04:43 soumya joined #gluster
04:51 overclk_ joined #gluster
04:51 Ara4Sh joined #gluster
04:56 spandit joined #gluster
04:56 kaushal_ joined #gluster
05:03 rafi joined #gluster
05:05 ndarshan joined #gluster
05:05 kotreshhr joined #gluster
05:06 maveric_amitc_ joined #gluster
05:07 Bhaskarakiran joined #gluster
05:10 deepakcs joined #gluster
05:11 RameshN joined #gluster
05:14 gem joined #gluster
05:16 anrao joined #gluster
05:19 saurabh joined #gluster
05:22 meghanam joined #gluster
05:29 kdhananjay joined #gluster
05:38 sakshi joined #gluster
05:40 aravindavk joined #gluster
05:41 shubhendu_ joined #gluster
05:47 vimal joined #gluster
05:48 schandra joined #gluster
05:48 kotreshhr joined #gluster
05:49 lalatenduM joined #gluster
05:50 raghu joined #gluster
05:50 Ara4Sh joined #gluster
05:55 gem_ joined #gluster
05:59 pppp joined #gluster
06:00 hagarth joined #gluster
06:06 anil joined #gluster
06:11 overclk joined #gluster
06:11 glusterbot News from newglusterbugs: [Bug 1207215] Data Tiering:Remove brick on a tier volume fails <https://bugzilla.redhat.com/show_bug.cgi?id=1207215>
06:11 glusterbot News from newglusterbugs: [Bug 1207227] Data Tiering:remove cold/hot brick seems to be behaving like or emulating detach-tier <https://bugzilla.redhat.com/show_bug.cgi?id=1207227>
06:11 glusterbot News from newglusterbugs: [Bug 1207238] data tiering:force Remove brick is detaching-tier <https://bugzilla.redhat.com/show_bug.cgi?id=1207238>
06:11 glusterbot News from newglusterbugs: [Bug 1206592] Data Tiering:Allow adding brick to hot tier too(or let user choose to add bricks to any tier of their wish) <https://bugzilla.redhat.com/show_bug.cgi?id=1206592>
06:16 ashiq joined #gluster
06:17 poornimag joined #gluster
06:20 atalur joined #gluster
06:24 Anjana joined #gluster
06:24 overclk joined #gluster
06:26 Manikandan joined #gluster
06:26 Manikandan_ joined #gluster
06:28 smohan joined #gluster
06:29 jtux joined #gluster
06:34 karnan joined #gluster
06:36 gem joined #gluster
06:44 shubhendu joined #gluster
06:46 shubhendu_ joined #gluster
06:47 ndarshan joined #gluster
06:48 gem_ joined #gluster
06:48 atinmu joined #gluster
06:49 ashiq joined #gluster
06:49 jtux joined #gluster
06:50 hagarth joined #gluster
06:52 soumya joined #gluster
07:13 gem_ joined #gluster
07:13 deniszh joined #gluster
07:21 hagarth nbalacha: can you please hop on to #gluster-dev?
07:25 atinmu joined #gluster
07:30 nbalacha hagarth: done
07:31 meghanam joined #gluster
07:36 hagarth joined #gluster
07:45 Pupeno joined #gluster
07:46 atinmu joined #gluster
07:48 kotreshhr joined #gluster
07:50 DV_ joined #gluster
07:51 fsimonce joined #gluster
07:51 liquidat joined #gluster
07:56 shubhendu_ joined #gluster
07:56 ndarshan joined #gluster
07:57 shubhendu joined #gluster
08:04 mbukatov joined #gluster
08:07 DV_ joined #gluster
08:16 Norky joined #gluster
08:17 Slashman joined #gluster
08:22 rjoseph joined #gluster
08:25 atalur_ joined #gluster
08:26 atalur__ joined #gluster
08:33 ktosiek joined #gluster
08:35 ira joined #gluster
08:42 harish_ joined #gluster
08:43 anrao joined #gluster
08:46 suliba joined #gluster
08:54 kbyrne joined #gluster
08:58 overclk joined #gluster
08:59 ira_ joined #gluster
09:00 Philambdo joined #gluster
09:02 lalatenduM joined #gluster
09:06 hgowtham joined #gluster
09:17 overclk joined #gluster
09:18 DV__ joined #gluster
09:20 atalur joined #gluster
09:23 jiffin joined #gluster
09:25 schandra joined #gluster
09:29 shubhendu joined #gluster
09:31 pcaruana joined #gluster
09:41 dusmant joined #gluster
09:42 glusterbot News from newglusterbugs: [Bug 1213295] Glusterd crashed after updating to 3.8 nightly build <https://bugzilla.redhat.com/show_bug.cgi?id=1213295>
09:45 smohan joined #gluster
09:53 overclk joined #gluster
10:00 twisted` hey, if I set auth.allow on a volume
10:01 twisted` I can still mount it on another machine that isn't in the allow
10:01 twisted` however it doesn't let me access the volume, df just crashes, lsof goes nuts...
10:01 twisted` only way to unmount is to kill the fuse process
10:02 twisted` but I kinda expected that during mount it would throw a "can't access" error vs. making the machine crash
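
For reference, a sketch of restricting client access with auth.allow; the volume name and addresses below are hypothetical. As noted above, a client outside the list may still complete the FUSE mount and only fail on access, so a hung mount usually needs a lazy unmount:

    # permit only these client addresses
    gluster volume set myvol auth.allow 192.168.10.11,192.168.10.12

    # optionally reject everything else explicitly
    gluster volume set myvol auth.reject '*'

    # verify the options took effect
    gluster volume info myvol

    # on a client with a hung mount, detach it without killing the fuse process
    umount -l /mnt/myvol
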
10:04 haomaiwang joined #gluster
10:06 ashiq joined #gluster
10:07 soumya joined #gluster
10:09 bene2 joined #gluster
10:12 anrao joined #gluster
10:12 glusterbot News from newglusterbugs: [Bug 1213304] nfs-ganesha: using features.enable command the nfs-ganesha process does come up on all four nodes <https://bugzilla.redhat.com/show_bug.cgi?id=1213304>
10:19 haomaiwa_ joined #gluster
10:22 schandra joined #gluster
10:26 ashiq joined #gluster
10:52 p8952 Is there an easy way to see the replication status between two bricks?
10:53 p8952 For example, I have a volume with two bricks configured to replicate and I add a third brick. How do I know when all data has been replicated?
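
A sketch of how this is usually checked, assuming a replica volume named myvol and a hypothetical new brick; when the pending-heal counts drop to zero, the new brick has caught up:

    # grow the replica set onto the new brick (example: replica 2 -> 3)
    gluster volume add-brick myvol replica 3 server3:/export/brick1

    # list entries that still need to be healed onto each brick
    gluster volume heal myvol info

    # summary counts of pending self-heals
    gluster volume heal myvol statistics heal-count
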
11:01 DV joined #gluster
11:12 glusterbot News from newglusterbugs: [Bug 1213349] [Snapshot] Scheduler should check vol-name exists or not  before adding scheduled jobs <https://bugzilla.redhat.com/show_bug.cgi?id=1213349>
11:12 glusterbot News from newglusterbugs: [Bug 1213352] nfs-ganesha: HA issue, the iozone process is not moving ahead, once the nfs-ganesha is killed <https://bugzilla.redhat.com/show_bug.cgi?id=1213352>
11:15 gem_ joined #gluster
11:21 soumya joined #gluster
11:21 meghanam joined #gluster
11:21 T0aD joined #gluster
11:22 overclk joined #gluster
11:22 Bhaskarakiran joined #gluster
11:23 atinmu joined #gluster
11:25 [Enrico] joined #gluster
11:26 shpank joined #gluster
11:28 kkeithley1 joined #gluster
11:42 soumya joined #gluster
11:45 glusterbot News from resolvedglusterbugs: [Bug 1114604] [FEAT] Improve SSL support <https://bugzilla.redhat.com/show_bug.cgi?id=1114604>
11:46 jiffin1 joined #gluster
11:46 atinmu joined #gluster
11:49 Anjana joined #gluster
11:53 edwardm61 joined #gluster
11:57 rafi1 joined #gluster
12:00 B21956 joined #gluster
12:06 kdhananjay joined #gluster
12:12 itisravi joined #gluster
12:12 Gill_ joined #gluster
12:15 rjoseph joined #gluster
12:17 Anjana joined #gluster
12:17 LebedevRI joined #gluster
12:17 poornimag joined #gluster
12:20 jiffin joined #gluster
12:23 aravindavk joined #gluster
12:28 Slashman joined #gluster
12:41 Anjana joined #gluster
12:43 glusterbot News from newglusterbugs: [Bug 1213380] Data Tiering: Tracker bug for disallowing attach and detach brick on a tiered volume(for 3.7 only) <https://bugzilla.redhat.com/show_bug.cgi?id=1213380>
12:43 vimal joined #gluster
12:44 ghenry joined #gluster
12:49 bene2 joined #gluster
12:51 rafi joined #gluster
13:02 DV__ joined #gluster
13:14 rjoseph joined #gluster
13:15 glusterbot News from resolvedglusterbugs: [Bug 1210686] DHT :- core was generated while running few test on volume <https://bugzilla.redhat.com/show_bug.cgi?id=1210686>
13:19 plarsen joined #gluster
13:20 overclk joined #gluster
13:28 georgeh-LT2 joined #gluster
13:29 xavih joined #gluster
13:31 hamiller joined #gluster
13:31 bennyturns joined #gluster
13:32 itpings joined #gluster
13:32 dgandhi joined #gluster
13:41 julim joined #gluster
13:43 glusterbot News from newglusterbugs: [Bug 1210404] BVT; Selinux throws AVC errors while running DHT automation on Rhel6.6 <https://bugzilla.redhat.com/show_bug.cgi?id=1210404>
13:44 rwheeler joined #gluster
13:50 itisravi joined #gluster
13:55 soumya joined #gluster
14:05 itspete joined #gluster
14:06 kdhananjay joined #gluster
14:07 xavih joined #gluster
14:07 hagarth joined #gluster
14:07 malevolent joined #gluster
14:09 itspete I'm trying to configure Gluster geo-replication, however I'm running into disk space issues.  I'm trying to transfer 1.8TB of data onto a Gluster volume with 4TB of available space but I'm running out of room before the transfer completes.  The .glusterfs hidden folder is massive - does GlusterFS really use this much overhead or might I be configuring something wrong?
14:10 ctria joined #gluster
14:10 hamiller The actual data files are hardlinked into the hidden .glusterfs folders, they consume inodes but not actual space. Is this a replicated volume?
14:11 itspete distributed, though it's only distributed on a single node
14:11 hamiller so you have a single XFS 4TB brick?
14:12 poornimag joined #gluster
14:13 aravindavk joined #gluster
14:13 itspete I have a single brick on a filesystem with 4TB free... Should it be a fixed-size XFS brick?
14:13 glusterbot News from newglusterbugs: [Bug 1211570] Data Tiering:UI:when a user looks for detach-tier help, instead command seems to be getting executed <https://bugzilla.redhat.com/show_bug.cgi?id=1211570>
14:13 hamiller Just trying to get a feel for your system
14:15 hamiller If you really have 4TB free, it should work fine. You may have to collect df/du data before and after to see what's consuming the space
14:15 hamiller are you running out of space, or inodes?
14:16 itspete df is saying the FS is at 100%
14:16 kotreshhr joined #gluster
14:17 itspete inode usage is at 8%
14:23 wushudoin joined #gluster
14:23 hamiller sounds like it's disk space then. I would start mapping out where it's being consumed. Gluster should NOT be storing multiple copies.
14:24 itspete Okay, thank you
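
A sketch of the df/du comparison suggested above, with a hypothetical brick path. Everything under .glusterfs should be a hardlink to a user-visible file, so du counts it only once; .glusterfs entries with a link count of 1 are a common sign of space that was never reclaimed:

    # filesystem-level usage of the brick
    df -h /export/brick1

    # du counts hardlinked files once, so this should roughly match the real data size
    du -sh /export/brick1

    # find .glusterfs files that are NOT hardlinked to anything outside it
    find /export/brick1/.glusterfs -type f -links 1 | head
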
14:24 jobewan joined #gluster
14:24 rwheeler joined #gluster
14:27 Kins joined #gluster
14:27 DV__ joined #gluster
14:27 atinmu joined #gluster
14:27 atrius joined #gluster
14:28 devilspgd joined #gluster
14:28 trig joined #gluster
14:28 prg3 joined #gluster
14:45 roost joined #gluster
14:48 rwheeler joined #gluster
14:49 devilspgd joined #gluster
14:53 Le22S joined #gluster
15:00 fsimonce` joined #gluster
15:00 fsimonce joined #gluster
15:03 bennyturns joined #gluster
15:04 CyrilPeponnet joined #gluster
15:08 kotreshhr left #gluster
15:11 coredump joined #gluster
15:11 virusuy_ joined #gluster
15:11 jbrooks joined #gluster
15:15 DV joined #gluster
15:17 churnd joined #gluster
15:19 lifeofguenter joined #gluster
15:26 cholcombe joined #gluster
15:28 DV__ joined #gluster
15:29 soumya joined #gluster
15:31 lpabon joined #gluster
15:34 itisravi joined #gluster
15:34 xavih joined #gluster
15:34 malevolent joined #gluster
15:40 mikedep333 joined #gluster
15:47 _Bryan_ joined #gluster
15:51 ProT-0-TypE joined #gluster
15:56 dberry joined #gluster
15:57 dberry joined #gluster
16:01 Fuz1on joined #gluster
16:07 coredump joined #gluster
16:09 Fuz1on Hello guys
16:09 Fuz1on i've been playing with glusterFS 3.6 for a while
16:09 Fuz1on and a lot of improvements have been done on georep
16:09 Fuz1on however i'm still a bit lost sometimes with some basic actions
16:11 sage_ joined #gluster
16:12 ira_ joined #gluster
16:14 Fuz1on especially with geo rep
16:14 Fuz1on i got a fully functional geo-rep
16:15 Fuz1on but i can't get a full sync from an already-used volume -> empty geo-rep slave volume
16:15 Fuz1on documentation mentions to erase the index
16:16 Fuz1on https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.1/html/Administration_Guide/ch12s09s02.html
16:20 Fuz1on how do I erase the index ? should I remove .glusterfs from the source volume ?
16:21 R0ok_ joined #gluster
16:37 tg2 semiosis, found out what it was (re 3.6.1 libs sticking) removing glusterfs-client doesn't autoremove glusterfs-common
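
A sketch of the cleanup tg2 describes on a Debian/Ubuntu client, using the package names shipped by the gluster PPAs:

    # removing only the client leaves the old shared libraries from glusterfs-common installed
    apt-get remove glusterfs-client

    # purge the library package explicitly so the old libs do not stick around
    apt-get purge glusterfs-client glusterfs-common

    # then install the desired version from the PPA
    apt-get update && apt-get install glusterfs-client glusterfs-common
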
16:45 soumya joined #gluster
16:52 wkf joined #gluster
16:54 Fuz1on I tried setting geo-replication.indexing to off, removing the .glusterfs content, and then starting the volume and replication again
16:54 Fuz1on no luck
16:54 Fuz1on some files have been transferred but not all
16:55 Fuz1on do you have any idea how to make a full sync between master and slave cluster ?
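
A hedged sketch of the resync sequence the linked guide describes (master volume, slave host, and slave volume names are hypothetical). The index is reset through the volume option; the .glusterfs directory holds the gfid hardlinks and should not be deleted. Newer changelog-based geo-replication may behave differently:

    # stop the geo-replication session
    gluster volume geo-replication mastervol slavehost::slavevol stop

    # reset the marker index so a full crawl is triggered on restart
    gluster volume set mastervol geo-replication.indexing off

    # start the session again; it will re-crawl and sync the whole volume
    gluster volume geo-replication mastervol slavehost::slavevol start

    # watch progress
    gluster volume geo-replication mastervol slavehost::slavevol status detail
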
16:55 haomaiwa_ joined #gluster
16:56 papamoose1 joined #gluster
17:01 ira_ joined #gluster
17:01 gem_ joined #gluster
17:05 hagarth joined #gluster
17:17 deepakcs joined #gluster
17:22 neofob joined #gluster
17:27 gem_ joined #gluster
17:32 Slashman joined #gluster
17:34 gem_ joined #gluster
17:36 anrao joined #gluster
17:37 Alpinist joined #gluster
17:40 wkf joined #gluster
17:45 squizzi joined #gluster
17:54 ekuric joined #gluster
18:01 lifeofguenter joined #gluster
18:02 lifeofguenter joined #gluster
18:02 lifeofguenter joined #gluster
18:04 gem_ joined #gluster
18:05 doctorray joined #gluster
18:14 jbrooks joined #gluster
18:15 doctorray I have a 2 server replica (1 brick each) and I'd like to add a brick to each server to increase the size of the volume, but still maintain 100% of the data on each server.  Would I just use volume add-brick and rebalance?  or is this a "stripe" scenario?
18:18 gem_ joined #gluster
18:25 coredump joined #gluster
18:28 papamoose1 left #gluster
18:48 doctorray Well, just tried it on my test setup. Feedback from the command helped me. It was just a matter of adding a pair of bricks and re-balancing, no striping involved. Thanks anyway.
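
A sketch of the expansion described above, assuming a 1x2 replica volume named myvol and hypothetical brick paths; adding one brick per server keeps the replica count at 2, and the rebalance spreads existing data onto the new distribute subvolume:

    # add one new brick per server, keeping replica 2
    gluster volume add-brick myvol replica 2 server1:/export/brick2 server2:/export/brick2

    # redistribute existing files across the new bricks
    gluster volume rebalance myvol start

    # check progress
    gluster volume rebalance myvol status
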
18:51 ProT-0-TypE joined #gluster
18:57 rwheeler joined #gluster
19:14 deepakcs joined #gluster
19:15 dbruhn joined #gluster
19:17 plarsen joined #gluster
19:18 deniszh joined #gluster
19:49 foster joined #gluster
20:16 lifeofguenter joined #gluster
20:36 wkf joined #gluster
20:37 chirino_m joined #gluster
21:14 jrdn joined #gluster
21:15 rshade98 ppa
21:15 rshade98 @ppa
21:15 glusterbot rshade98: The official glusterfs packages for Ubuntu are available here: 3.4: http://goo.gl/M9CXF8 3.5: http://goo.gl/6HBwKh 3.6: http://goo.gl/XyYImN -- See more PPAs for QEMU with GlusterFS support, and GlusterFS QA releases at https://launchpad.net/~gluster -- contact semiosis with feedback
21:17 jrdn hello, so I'm trying to get wordpress to sync using glusterfs.  our current deploy process is like capistrano (creates a new releases folder then symlinks to a ./current folder), i want the current folder to be behind gluster, however, i'm not sure if gluster supports this symlink process within our deployment...
21:17 jrdn do i have to rewrite the process?
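
GlusterFS does support POSIX symlinks over a FUSE mount, so a capistrano-style layout can live on the shared volume; a minimal sketch with hypothetical paths:

    # releases and the 'current' symlink both live on the gluster mount
    cd /mnt/gluster/wordpress
    mkdir -p releases/20150420-1
    # ... deploy the new release into releases/20150420-1 ...

    # repoint the symlink the web server's document root uses
    ln -sfn releases/20150420-1 current

    # for a strictly atomic swap, create then rename:
    # ln -s releases/20150420-1 current.tmp && mv -T current.tmp current
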
21:24 ildefonso joined #gluster
21:40 badone_ joined #gluster
21:52 neofob joined #gluster
22:08 prg3 joined #gluster
22:30 prg3 joined #gluster
22:42 neofob left #gluster
23:15 glusterbot News from newglusterbugs: [Bug 1105283] Failure to start geo-replication. <https://bugzilla.redhat.com/show_bug.cgi?id=1105283>
23:15 glusterbot News from newglusterbugs: [Bug 1171313] Failure to sync files to slave with 2 bricks. <https://bugzilla.redhat.com/show_bug.cgi?id=1171313>
23:32 gildub joined #gluster
23:39 cyberbootje joined #gluster
23:52 Gill joined #gluster
