IRC log for #gluster, 2015-06-26

All times shown according to UTC.

Time Nick Message
00:00 kripper joined #gluster
00:00 kripper how can I set the ssh-port for geo-rep?
00:00 kripper i.e. a non-standard port instead of 22
00:01 kripper unanswered question: http://stackoverflow.com/questions/27525456/glusterfs-geo-replication-on-non-standard-ssh-port/
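
One workaround that comes up on the mailing list for this (a sketch only: MASTERVOL and slavehost::slavevol are placeholders, and the ssh-command config key plus the secret.pem path are assumptions about this gsyncd release) is to override the ssh command the existing geo-rep session uses:

    # point the session's ssh at the non-standard port
    gluster volume geo-replication MASTERVOL slavehost::slavevol config ssh-command \
        "ssh -p 2222 -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.pem"
    # confirm the session comes back Active/Passive rather than Faulty
    gluster volume geo-replication MASTERVOL slavehost::slavevol status
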
00:02 kripper what is the difference with a normal rsync?
00:05 elyograg left #gluster
00:08 kripper left #gluster
00:08 kripper joined #gluster
00:25 hagarth joined #gluster
00:41 gildub joined #gluster
01:01 kripper left #gluster
01:02 harish joined #gluster
01:07 natarej__ joined #gluster
01:13 julim joined #gluster
01:15 PatNarciso any quick tricks for improving 'ls' xfs performance issues?
01:15 PatNarciso tricks, or direction...
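
Two knobs that are commonly suggested for slow listings (a sketch; MYVOL and the brick mount point are placeholders, and whether readdir-ahead helps depends on the workload):

    # volume side: prefetch directory entries for readdirp
    gluster volume set MYVOL performance.readdir-ahead on
    # brick side (XFS): skip atime updates during directory scans
    mount -o remount,noatime,nodiratime /bricks/brick01
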
01:20 corretico joined #gluster
01:32 jmarley joined #gluster
01:55 PatNarciso what defines the dir depth of the .glusterfs dir?  subvols-per-directory?
02:07 theron joined #gluster
02:08 PatNarciso nah.  not subvols-per-directory.
02:14 kanagaraj joined #gluster
02:23 smohan joined #gluster
02:33 Peppaq joined #gluster
02:35 pppp joined #gluster
02:35 bharata-rao joined #gluster
02:38 atalur joined #gluster
02:41 corretico joined #gluster
02:42 arao joined #gluster
02:42 plarsen joined #gluster
02:57 atalur joined #gluster
03:03 plarsen joined #gluster
03:03 plarsen joined #gluster
03:08 vmallika joined #gluster
03:12 victori joined #gluster
03:18 arao joined #gluster
03:26 sripathi joined #gluster
03:31 nangthang joined #gluster
03:32 atalur joined #gluster
03:39 [7] joined #gluster
03:41 vmallika joined #gluster
03:45 gem joined #gluster
03:45 plarsen joined #gluster
03:55 sakshi joined #gluster
03:59 anrao joined #gluster
04:04 plarsen joined #gluster
04:04 overclk joined #gluster
04:05 corretico joined #gluster
04:06 Manikandan joined #gluster
04:06 Manikandan_ joined #gluster
04:07 nbalacha joined #gluster
04:10 victori joined #gluster
04:12 overclk joined #gluster
04:13 rjoseph joined #gluster
04:14 victori joined #gluster
04:23 plarsen joined #gluster
04:24 uebera|| joined #gluster
04:33 victori joined #gluster
04:37 vimal joined #gluster
04:44 glusterbot News from newglusterbugs: [Bug 1235904] fgetxattr() crashes when key name is NULL <https://bugzilla.redhat.com/show_bug.cgi?id=1235904>
04:44 glusterbot News from resolvedglusterbugs: [Bug 1126801] glusterfs logrotate config file pollutes global config <https://bugzilla.redhat.com/show_bug.cgi?id=1126801>
04:45 rafi joined #gluster
04:48 sripathi joined #gluster
04:59 ndarshan joined #gluster
05:09 atalur joined #gluster
05:10 Manikandan joined #gluster
05:11 vmallika joined #gluster
05:15 plarsen joined #gluster
05:20 jiffin joined #gluster
05:22 arao joined #gluster
05:23 plarsen joined #gluster
05:24 ashiq joined #gluster
05:28 kshlm joined #gluster
05:28 jiffin1 joined #gluster
05:29 anrao joined #gluster
05:30 soumya_ joined #gluster
05:33 plarsen joined #gluster
05:36 Bhaskarakiran joined #gluster
05:38 ppai joined #gluster
05:41 kaushal_ joined #gluster
05:46 prg3 joined #gluster
05:50 maveric_amitc_ joined #gluster
05:53 kdhananjay joined #gluster
05:54 atalur joined #gluster
05:54 prg3 joined #gluster
06:03 soumya_ joined #gluster
06:03 hgowtham joined #gluster
06:03 raghu joined #gluster
06:06 prg3 joined #gluster
06:14 glusterbot News from newglusterbugs: [Bug 1235921] POSIX: brick logs filled with _gf_log_callingfn due to this==NULL in dict_get <https://bugzilla.redhat.com/show_bug.cgi?id=1235921>
06:14 glusterbot News from newglusterbugs: [Bug 1235923] POSIX: brick logs filled with _gf_log_callingfn due to this==NULL in dict_get <https://bugzilla.redhat.com/show_bug.cgi?id=1235923>
06:16 jvandewege joined #gluster
06:16 meghanam joined #gluster
06:18 kdhananjay1 joined #gluster
06:22 jtux joined #gluster
06:23 anil joined #gluster
06:25 Manikandan joined #gluster
06:26 spandit joined #gluster
06:33 overclk joined #gluster
06:42 kdhananjay joined #gluster
06:44 nangthang joined #gluster
06:44 glusterbot News from newglusterbugs: [Bug 1235928] memory corruption in the way we maintain migration information in inodes. <https://bugzilla.redhat.com/show_bug.cgi?id=1235928>
06:49 overclk joined #gluster
06:49 overclk joined #gluster
06:50 harish joined #gluster
06:51 arao joined #gluster
06:52 anrao joined #gluster
07:01 kotreshhr joined #gluster
07:12 RedW joined #gluster
07:13 rafi joined #gluster
07:14 glusterbot News from newglusterbugs: [Bug 1235939] Provide and use a common way to do reference counting of (internal) structures <https://bugzilla.redhat.com/show_bug.cgi?id=1235939>
07:22 R0ok_ joined #gluster
07:22 bharata_ joined #gluster
07:26 Trefex joined #gluster
07:26 meghanam left #gluster
07:26 meghanam joined #gluster
07:28 overclk joined #gluster
07:29 rafi1 joined #gluster
07:32 jiffin joined #gluster
07:33 maveric_amitc_ joined #gluster
07:37 auzty joined #gluster
07:38 auzty any ideas why gluster volume rebalance failed for localhost?
07:39 auzty [dht-rebalance.c:1539:gf_defrag_start_crawl] 0-myvol-dht: fix layout on / failed
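
The per-node rebalance log usually carries the underlying error for a failure like that; a quick triage sketch (volume name taken from the log line above, log path is the usual default):

    gluster volume rebalance myvol status
    gluster volume status myvol          # make sure every brick process is actually up
    less /var/log/glusterfs/myvol-rebalance.log
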
07:45 elico joined #gluster
07:45 kevein joined #gluster
07:45 soumya_ joined #gluster
07:52 anrao joined #gluster
07:54 ctria joined #gluster
07:56 overclk joined #gluster
08:04 Debloper joined #gluster
08:07 harish joined #gluster
08:09 elico joined #gluster
08:11 overclk joined #gluster
08:12 ppai joined #gluster
08:13 elico joined #gluster
08:14 Slashman joined #gluster
08:28 atinm joined #gluster
08:43 Ulrar Is it possible to fine tune the placement of the different volumes ? The idea would be to have a few main servers with all the volumes on them, and add other servers with only one volume on them replicated to the main servers. Is that possible with glusterFS ?
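
Placement is decided per volume by the bricks named at create time, so this layout is possible by creating the extra volume only on the servers that should carry it. A sketch with hypothetical hostnames and brick paths:

    # volumes that stay on the main servers
    gluster volume create vol_main replica 2 main1:/bricks/vol_main main2:/bricks/vol_main
    # one extra volume replicated between a main server and an add-on server
    gluster volume create vol_edge replica 2 main1:/bricks/vol_edge edge1:/bricks/vol_edge
    gluster volume start vol_main
    gluster volume start vol_edge
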
08:43 sysconfig joined #gluster
08:45 glusterbot News from newglusterbugs: [Bug 1235964] Disperse volume: FUSE I/O error after self healing the disk failure files <https://bugzilla.redhat.com/show_bug.cgi?id=1235964>
08:45 glusterbot News from newglusterbugs: [Bug 1235966] [RHEV-RHGS] After self-heal operation, VM Image file loses the sparseness property <https://bugzilla.redhat.com/show_bug.cgi?id=1235966>
08:47 atinm joined #gluster
08:50 elico joined #gluster
09:04 nsoffer joined #gluster
09:09 husanu0 joined #gluster
09:15 anrao joined #gluster
09:16 corretico joined #gluster
09:23 husanu1 joined #gluster
09:29 al joined #gluster
09:33 nbalacha joined #gluster
09:35 Sjors joined #gluster
09:41 yoda1410 joined #gluster
09:42 ira joined #gluster
09:44 gildub joined #gluster
09:47 spalai joined #gluster
09:51 Skinny_ it shouldn't be too difficult to set up geo-replication, right .. driving me crazy.. it always ends up in a Faulty state and the log files don't show what exactly went wrong
09:55 atinm joined #gluster
09:56 yoda1410 ayo
09:56 yoda1410 I have a problem with one volume when I try to add one brick to it ...
09:57 yoda1410 http://pastebin.com/K9vhts29
09:57 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
09:59 yoda1410 and, when I use "gluster volume info", I don't see the brick lpr-nas01:/brick-xiv2/pegasos_saving in any volume ...
10:00 Skinny_ did you use that brick in a previously used volume ?
10:00 SpComb^ yoda1410: you have a .glusterfs/ in lpr-nas01:/brick-xiv2/pegasos_saving/ ?
10:00 yoda1410 but, on the server, I have one directory /brick-xiv2/pegasos_saving ... I don't know what it's doing ...
10:00 yoda1410 yes, I have .glusterfs in this directory
10:00 Skinny_ https://joejulian.name/blog/glusterfs-path-or-a-prefix-of-it-is-already-part-of-a-volume/
10:01 autoditac joined #gluster
10:01 Skinny_ from one of my master nodes, the only relevant line I can find is : [2015-06-26 11:59:25.628027] I [monitor(monitor):282:monitor] Monitor: worker(/bricks/brick01) died in startup phase
10:01 yoda1410 thanks
10:01 autoditac joined #gluster
10:02 Skinny_ ssh works fine, keys are copied over correctly
10:03 SpComb^ yoda1410: yeah, note the xattrs as well if you're re-using something that was previously a brick, best just to nuke and recreate the entire brick directory
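
The cleanup from that blog post amounts to stripping the old volume markers off the brick directory (a sketch; only run this against a path that is definitely not part of any live volume):

    setfattr -x trusted.glusterfs.volume-id /brick-xiv2/pegasos_saving
    setfattr -x trusted.gfid /brick-xiv2/pegasos_saving
    rm -rf /brick-xiv2/pegasos_saving/.glusterfs
    # then retry the add-brick
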
10:06 yoda1410 after changing the xattr attributes and removing .glusterfs, can I remove the entire folder?
10:10 yoda1410 when I remove the folder lpr-nas01:/brick-xiv2/pegasos_saving, glusterd dies ...
10:10 yoda1410 after making the changes listed in https://joejulian.name/blog/glusterfs-path-or-a-prefix-of-it-is-already-part-of-a-volume/
10:11 SpComb^ are you sure it's not actually in use? :P
10:11 SpComb^ careful
10:14 Skinny_ 26599 write(2, "[2015-06-26 12:05:12.342104] E [syncdutils:247:log_raise_exception] <top>: malformed path\n", 90 <unfinished ...>
10:15 Skinny_ hmm
10:17 yoda1410 yes I'm sure ... http://www.fpaste.org/236906/31381414/
10:17 yoda1410 I don't see any reference to lpr-nas01:/brick-xiv2/pegasos_saving ...
10:17 yoda1410 only brick-xiv1 ...
10:18 SpComb^ how are they mounted
10:20 yoda1410 brick-xiv2 is a local drive
10:20 yoda1410 it's the same for brick-xiv1
10:21 SpComb^ sounds like you have something confusing going on
10:21 meghanam joined #gluster
10:30 lanning joined #gluster
10:33 overclk joined #gluster
10:33 The_Ball joined #gluster
10:34 The_Ball I'm using 3.7.2 and trying to migrate a brick on a node. I read that the new way of doing it is to add-brick then remove-brick, but I can't seem to make it work; I just keep getting "Operation failed"
10:35 The_Ball I've been able to migrate one brick by using the deprecated replace-brick .... commit force
10:35 NTQ joined #gluster
10:35 The_Ball Then on the next scan the new brick was healed
10:36 The_Ball my version doesn't have replace brick .... start/stop/status etc, only commit force
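
Given that only commit force is available, the usual sequence for a replicated volume is to swap the brick definition and let self-heal repopulate it afterwards. A sketch with hypothetical names:

    gluster volume replace-brick MYVOL oldnode:/bricks/b1 newnode:/bricks/b1 commit force
    gluster volume heal MYVOL full
    gluster volume heal MYVOL info     # watch the pending-heal list drain
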
10:36 jmarley joined #gluster
10:39 malevolent hi there
10:39 malevolent I'm having troubles to start glusterd at boot on centos7
10:39 malevolent it seems it is trying to start before network.service
10:40 malevolent the unit file says: After=network.target rpcbind.service
10:40 malevolent so seems to be ok
10:41 malevolent anyone had the same problem?
10:43 vovcia nope centos7 boots ok
10:45 LebedevRI joined #gluster
10:46 malevolent ok, not here...
10:46 malevolent :(
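
A workaround often suggested for this symptom (assuming the race really is with network bring-up) is a systemd drop-in that makes glusterd wait for network-online.target instead of the weaker network.target:

    mkdir -p /etc/systemd/system/glusterd.service.d
    printf '[Unit]\nWants=network-online.target\nAfter=network-online.target\n' \
        > /etc/systemd/system/glusterd.service.d/wait-online.conf
    systemctl daemon-reload
    # network-online.target only actually waits if a wait service is enabled
    # (NetworkManager-wait-online assumes NetworkManager manages the interfaces)
    systemctl enable NetworkManager-wait-online.service
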
10:49 gem joined #gluster
11:01 jmarley joined #gluster
11:06 kotreshhr joined #gluster
11:07 atinm joined #gluster
11:13 NTQ How can I bind glusterfs to a specific IP?
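
For the management daemon there is a bind-address option in glusterd's own volfile (a sketch; the address is a placeholder, and this only covers glusterd itself, not the brick or client processes):

    # add inside the existing "volume management ... end-volume" block
    # in /etc/glusterfs/glusterd.vol:
    #     option transport.socket.bind-address 192.0.2.10
    # then restart the management daemon
    systemctl restart glusterd
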
11:15 glusterbot News from newglusterbugs: [Bug 1236019] peer probe results in Peer Rejected(Connected) <https://bugzilla.redhat.com/show_bug.cgi?id=1236019>
11:15 glusterbot News from newglusterbugs: [Bug 1236009] do an explicit lookup on the inodes linked in readdirp <https://bugzilla.redhat.com/show_bug.cgi?id=1236009>
11:18 rjoseph joined #gluster
11:20 kbyrne joined #gluster
11:21 kbyrne joined #gluster
11:24 gem joined #gluster
11:25 soumya_ joined #gluster
11:26 The_Ball joined #gluster
11:30 jmarley joined #gluster
11:31 mribeirodantas joined #gluster
11:31 harish_ joined #gluster
11:34 rafi joined #gluster
11:34 ppai joined #gluster
11:35 firemanxbr joined #gluster
11:41 rafi joined #gluster
11:43 ju5t joined #gluster
11:52 Manikandan joined #gluster
11:52 Manikandan_ joined #gluster
11:54 apahim joined #gluster
12:09 jtux joined #gluster
12:12 jcastill1 joined #gluster
12:15 glusterbot News from newglusterbugs: [Bug 1236032] Tiering: unlink failed with error "Invaid argument" <https://bugzilla.redhat.com/show_bug.cgi?id=1236032>
12:17 jcastillo joined #gluster
12:21 overclk joined #gluster
12:22 soumya joined #gluster
12:24 psilvao joined #gluster
12:25 P0w3r3d joined #gluster
12:25 psilvao Hi! Quick question: is there a command or option for managing inodes inside a gluster volume?
12:33 kotreshhr left #gluster
12:42 rjoseph joined #gluster
12:42 mator psilvao, what do you mean by managing inodes?
12:43 Trefex joined #gluster
12:43 psilvao Hi mator: our problem is that inode usage shows 100% used when you query with df -i
12:44 psilvao and .glusterfs is the culprit
12:45 psilvao for that reason, I think there might be a command in gluster to free inodes or to manage them...
12:45 mator .glusterfs is magic; no one wants to mess with it...
12:45 mator search mailing list archives for what you can do with .glusterfs
12:46 glusterbot News from newglusterbugs: [Bug 1236050] Disperse volume: fuse mount hung after self healing <https://bugzilla.redhat.com/show_bug.cgi?id=1236050>
12:47 julim joined #gluster
12:48 mator psilvao, for the begin -  https://joejulian.name/blog/what-is-this-new-glusterfs-directory-in-33/
12:50 ghenry joined #gluster
12:51 wkf joined #gluster
12:55 psilvao mator: why is 95% of the inode usage assigned to .gluster? Is that normal?
12:57 mator psilvao, i don't know... :-/
12:58 psilvao mator: perhaps it happens when you use the brick directly and not the mounted volume...
12:58 elico left #gluster
12:58 psilvao mator: our mistake was that we used the brick directly
12:58 mator https://disqus.com/home/discussion/joejulian/what_is_this_new_glusterfs_directory_in_33/
12:58 psilvao mator: but perhaps .gluster used 95% of the inodes...
12:59 mator what is .gluster? you mean .glusterfs ?
13:00 mator if i had problems with .glusterfs , i would ask in mailing list...
13:01 psilvao mator: reading https://joejulian.name/blog/what-is-this-new-glusterfs-directory-in-33/ , when we create a file some gluster metadata is stored in the .gluster directory
13:02 mator i've no idea what is ".gluster" , by ^^ link it is probably called ".glusterfs"
13:02 bennyturns joined #gluster
13:03 bennyturns joined #gluster
13:07 psilvao mator: interesting.. see this cluster.min-free-inodes  http://www.gluster.org/community/documentation/index.php/Documenting_the_undocumented
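
For reference, that option is set like any other volume option; it is a placement threshold for new files, not something that reclaims inodes. A sketch (volume name and brick mount point hypothetical):

    df -i /bricks/brick01                                  # inode usage on the brick filesystem
    # tell DHT to stop placing new files on bricks below 10% free inodes
    gluster volume set MYVOL cluster.min-free-inodes 10%
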
13:14 Trefex1 joined #gluster
13:25 hamiller joined #gluster
13:26 dgandhi joined #gluster
13:26 aaronott joined #gluster
13:27 lkoranda_ joined #gluster
13:27 marbu joined #gluster
13:32 mbukatov joined #gluster
13:35 Trefex joined #gluster
13:37 lkoranda joined #gluster
13:42 shyam joined #gluster
13:46 gem joined #gluster
13:47 spalai joined #gluster
14:02 kdhananjay joined #gluster
14:06 chirino joined #gluster
14:15 harish_ joined #gluster
14:16 glusterbot News from resolvedglusterbugs: [Bug 1223286] [geo-rep]: worker died with "ESTALE" when performed rm -rf on a directory from mount of master volume <https://bugzilla.redhat.com/show_bug.cgi?id=1223286>
14:31 victori joined #gluster
14:40 cholcombe joined #gluster
14:41 lpabon joined #gluster
14:41 corretico joined #gluster
14:52 jmarley joined #gluster
15:04 bennyturns joined #gluster
15:08 victori joined #gluster
15:15 aaronott1 joined #gluster
15:23 ctria joined #gluster
15:28 rafi joined #gluster
15:38 jiffin joined #gluster
15:44 soumya joined #gluster
15:46 sysconfig joined #gluster
15:46 glusterbot News from newglusterbugs: [Bug 1236128] Quota list is not working on tiered volume. <https://bugzilla.redhat.com/show_bug.cgi?id=1236128>
15:46 glusterbot News from resolvedglusterbugs: [Bug 1194753] Storage tier feature <https://bugzilla.redhat.com/show_bug.cgi?id=1194753>
15:49 squizzi_ joined #gluster
15:49 nbalacha joined #gluster
15:53 cholcombe gluster: I just submitted a juju charm to put together clusters quickly.  It supports every volume type that gluster does: https://bugs.launchpad.net/charms/+bug/1469213
15:53 cholcombe i'll be adding in rhel support shortly
15:54 ndevos cholcombe: nice, you should announce that on gluster-users@gluster.org and do a blog post about it, many ubuntu users would like to get more details
15:55 cholcombe ndevos: will do :)
15:55 cholcombe I'm on the list
15:55 ndevos cholcombe: thanks!
15:59 TheCthulhu joined #gluster
16:09 maveric_amitc_ joined #gluster
16:11 ctria joined #gluster
16:15 calavera joined #gluster
16:23 Agoldman joined #gluster
16:25 Rapture joined #gluster
16:26 Agoldman Hello all, I have some weird behavior going on here: I'm replacing a brick that holds the storage for a VM, and gluster volume heal $VOL full won't copy the disk image. It will populate all the other items in the volume.
16:26 Agoldman any help would be appreciated.
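
A couple of things usually asked for when heal full skips a file (a sketch; the volume name is a placeholder, and a running VM keeping the image open can make the entry look perpetually pending):

    gluster volume heal MYVOL info                # is the image listed as pending or split-brain?
    less /var/log/glusterfs/glustershd.log        # self-heal daemon log on each replica node
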
16:27 sysconfig joined #gluster
16:38 calavera joined #gluster
16:42 meghanam joined #gluster
16:47 vmallika joined #gluster
16:49 NTQ1 joined #gluster
16:55 cholcombe joined #gluster
17:11 spalai joined #gluster
17:20 Peppard joined #gluster
17:31 rafi joined #gluster
17:34 Rapture joined #gluster
17:39 meghanam joined #gluster
17:40 apahim joined #gluster
17:44 jiffin joined #gluster
17:45 dgbaley joined #gluster
17:47 dgbaley I just had a splitbrain of 2 files. I used splitmount and removed all but one copy of each of the files. After running heal, nothing seems to happen, there's still only a single copy of each file.
17:49 meghanam joined #gluster
17:52 dgbaley The two files are listed in info... no errors in logs... and network is quiet
17:56 aaronott joined #gluster
18:02 dgbaley self heal daemons are running and show up in volume status, no tasks running
18:07 victori joined #gluster
18:13 rotbeard joined #gluster
18:19 calavera joined #gluster
18:30 mribeirodantas joined #gluster
18:35 hagarth joined #gluster
18:37 jmarley joined #gluster
18:42 dgbaley Looks like through a combination of restarting brick daemons, touching the files in question, and reissuing heal full's, things have settled.
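
For the next time around, the usual way to confirm a split-brain is really cleared (a sketch; the brick path and file name are placeholders):

    gluster volume heal MYVOL info split-brain
    # compare the AFR changelog xattrs on each brick's copy of the file
    getfattr -d -m . -e hex /bricks/b1/path/to/file
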
18:43 calavera joined #gluster
19:07 atrius joined #gluster
19:09 nsoffer joined #gluster
19:24 RedW joined #gluster
19:30 rwheeler joined #gluster
19:34 marcoceppi joined #gluster
19:34 marcoceppi joined #gluster
20:01 calavera joined #gluster
20:13 jmarley joined #gluster
20:14 atrius joined #gluster
20:27 DV joined #gluster
20:27 DV__ joined #gluster
20:45 DV joined #gluster
20:45 DV__ joined #gluster
20:54 jmarley joined #gluster
21:11 DV__ joined #gluster
21:38 Rapture joined #gluster
21:42 DV joined #gluster
21:48 DV__ joined #gluster
22:02 calavera joined #gluster
22:04 wkf joined #gluster
22:28 DV__ joined #gluster
22:31 mckaymatt joined #gluster
22:59 jmarley joined #gluster
23:51 hagarth joined #gluster
