
IRC log for #gluster, 2015-05-29


All times shown according to UTC.

Time Nick Message
00:12 chirino joined #gluster
00:14 JoeJulian dgbaley: whichever you used when defining the volume.
00:15 JoeJulian @targeted rebalance
00:15 JoeJulian damn
00:17 JoeJulian @targeted fix-layout
00:17 glusterbot JoeJulian: You can trigger a fix-layout for a single directory by setting the extended attribute trusted.distribute.fix.layout to any value for that directory. This is done through a fuse client mount. This does not move any files, just fixes the dht hash mapping.
00:18 dgbaley JoeJulian: Ah, that's good. But my first peer is still showing as an IP address in peer info from the other servers even though I tried to probe in the other direction (as I read in the docs would convert the IP to a hostname)
00:19 JoeJulian Hmm, that's odd
00:19 jayunit1000 joined #gluster
00:19 JoeJulian check in /var/lib/glusterd/peers
00:19 JoeJulian maybe there's a clue in there.
00:21 dgbaley Ah. hostname1=<IP> hostname2=<name>
00:21 dgbaley Do clients get all of those hostnames?
00:22 JoeJulian clients just get whatever the brick is named.
00:22 dgbaley Ah, perfect (for me at least)
00:22 JoeJulian Which is useful, for instance if you use hostnames, you can use split-horizon dns to have the servers use a different network from the clients by resolving the hostnames differently.
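For context, peer definitions are stored as small key=value files under /var/lib/glusterd/peers/<UUID>, and probing the first server back by hostname from another peer is the usual way to get a name recorded instead of an IP. A minimal sketch, with placeholder hostnames:

    # run on any peer other than server1
    gluster peer probe server1.example.com
    gluster peer status
    # inspect what each peer has recorded
    cat /var/lib/glusterd/peers/*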
00:45 JoeJulian @change "targeted fix-layout" "s/trusted.distribute/distribute/"
00:45 glusterbot JoeJulian: Error: The command "change" is available in the Factoids, Herald, and Topic plugins.  Please specify the plugin whose command you wish to call by using its name as a command before "change".
00:45 JoeJulian @factoids change "targeted fix-layout" "s/trusted.distribute/distribute/"
00:45 glusterbot JoeJulian: Error: u's/trusted.distribute/distribute/' is not a valid key id.
00:45 JoeJulian @factoids change "targeted fix-layout" 1 "s/trusted.distribute/distribute/"
00:45 glusterbot JoeJulian: The operation succeeded.
00:45 JoeJulian @targeted fix-layout
00:45 glusterbot JoeJulian: You can trigger a fix-layout for a single directory by setting the extended attribute distribute.fix.layout to any value for that directory. This is done through a fuse client mount. This does not move any files, just fixes the dht hash mapping.
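For readers following along, the factoid above corresponds to a plain setfattr call on the directory through a FUSE mount. A minimal sketch, assuming a FUSE mount at /mnt/gv0 and a placeholder directory path (any xattr value works; no file data is moved, only the DHT layout map is rewritten):

    setfattr -n distribute.fix.layout -v "yes" /mnt/gv0/path/to/directory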
01:18 rotbeard joined #gluster
01:19 aaronott joined #gluster
01:33 bharata-rao joined #gluster
01:39 David_H_Smith joined #gluster
01:45 hagarth joined #gluster
01:55 prilly joined #gluster
01:56 nangthang joined #gluster
02:15 ira joined #gluster
02:24 gildub joined #gluster
02:25 harish__ joined #gluster
02:26 DV joined #gluster
02:29 kdhananjay joined #gluster
02:30 harish__ joined #gluster
02:43 MagicDice joined #gluster
03:05 DV_ joined #gluster
03:13 [7] joined #gluster
03:15 overclk joined #gluster
03:56 sakshi joined #gluster
03:57 itisravi joined #gluster
04:03 spandit joined #gluster
04:17 spalai joined #gluster
04:33 yazhini joined #gluster
04:51 rjoseph joined #gluster
04:55 Triveni joined #gluster
05:00 gildub joined #gluster
05:00 ndarshan joined #gluster
05:09 hgowtham joined #gluster
05:10 ndarshan joined #gluster
05:13 ashiq joined #gluster
05:14 Apeksha joined #gluster
05:16 gem joined #gluster
05:22 deniszh joined #gluster
05:22 Bhaskarakiran joined #gluster
05:24 ckotil joined #gluster
05:25 karnan joined #gluster
05:29 Manikandan joined #gluster
05:31 rafi joined #gluster
05:33 glusterbot News from newglusterbugs: [Bug 1226139] Implement MKNOD fop in bit-rot. <https://bugzilla.redhat.com/show_bug.cgi?id=1226139>
05:34 kdhananjay joined #gluster
05:39 deniszh joined #gluster
05:40 Triveni_ joined #gluster
05:52 gem joined #gluster
05:53 spalai joined #gluster
05:56 anil joined #gluster
06:01 Philambdo joined #gluster
06:05 Bhaskarakiran joined #gluster
06:18 ashiq joined #gluster
06:19 jtux joined #gluster
06:22 vimal joined #gluster
06:29 atalur joined #gluster
06:36 spalai joined #gluster
06:36 liquidat joined #gluster
06:47 TvL2386 joined #gluster
06:51 arcolife joined #gluster
06:58 [Enrico] joined #gluster
07:01 al joined #gluster
07:07 kshlm joined #gluster
07:10 David_H_Smith joined #gluster
07:22 Trefex joined #gluster
07:35 fsimonce joined #gluster
07:38 ctria joined #gluster
07:46 [Enrico] joined #gluster
07:49 s19n joined #gluster
07:53 anrao joined #gluster
07:55 ninkotech__ joined #gluster
08:07 sas_ joined #gluster
08:12 DV joined #gluster
08:24 DV_ joined #gluster
08:34 glusterbot News from newglusterbugs: [Bug 1226207] [FEAT] directory level snapshot create <https://bugzilla.redhat.com/show_bug.cgi?id=1226207>
08:34 glusterbot News from newglusterbugs: [Bug 1226210] [FEAT] directory level snapshot clone <https://bugzilla.redhat.com/show_bug.cgi?id=1226210>
08:59 aravindavk joined #gluster
09:04 glusterbot News from newglusterbugs: [Bug 1226217] [Backup]: Unable to create a glusterfind session <https://bugzilla.redhat.com/show_bug.cgi?id=1226217>
09:04 glusterbot News from newglusterbugs: [Bug 1226220] [FEAT] directory level SSL/TLS auth <https://bugzilla.redhat.com/show_bug.cgi?id=1226220>
09:04 glusterbot News from newglusterbugs: [Bug 1226223] Mount broker user add command removes existing volume for a mountbroker user when second volume is attached to same user <https://bugzilla.redhat.com/show_bug.cgi?id=1226223>
09:04 glusterbot News from newglusterbugs: [Bug 1226225] [FEAT] volume size query support <https://bugzilla.redhat.com/show_bug.cgi?id=1226225>
09:04 glusterbot News from newglusterbugs: [Bug 1226213] snap_scheduler script must be usable as python module. <https://bugzilla.redhat.com/show_bug.cgi?id=1226213>
09:04 glusterbot News from newglusterbugs: [Bug 1226224] [RFE] Quota: Make "quota-deem-statfs" option ON, by default, when quota is enabled. <https://bugzilla.redhat.com/show_bug.cgi?id=1226224>
09:23 arcolife joined #gluster
09:44 glusterbot News from resolvedglusterbugs: [Bug 1226217] [Backup]: Unable to create a glusterfind session <https://bugzilla.redhat.com/show_bug.cgi?id=1226217>
09:48 arcolife joined #gluster
10:04 glusterbot News from newglusterbugs: [Bug 1226254] Glusterd crash <https://bugzilla.redhat.com/show_bug.cgi?id=1226254>
10:14 glusterbot News from resolvedglusterbugs: [Bug 1191486] daemons abstraction & refactoring <https://bugzilla.redhat.com/show_bug.cgi?id=1191486>
10:17 Norky joined #gluster
10:28 poornimag joined #gluster
10:29 ppai joined #gluster
10:34 glusterbot News from newglusterbugs: [Bug 1226255] Undefined symbol "changelog_select_event" <https://bugzilla.redhat.com/show_bug.cgi?id=1226255>
10:37 rafi1 joined #gluster
10:43 prilly_ joined #gluster
10:48 nsoffer joined #gluster
10:50 dusmant joined #gluster
11:03 poornimag joined #gluster
11:04 glusterbot News from newglusterbugs: [Bug 1226272] Volume heal info not reporting files in split brain and core dumping, after upgrading to 3.7.0 <https://bugzilla.redhat.com/show_bug.cgi?id=1226272>
11:07 kanagaraj joined #gluster
11:08 hagarth joined #gluster
11:23 rafi1 joined #gluster
11:25 ira joined #gluster
11:35 glusterbot News from newglusterbugs: [Bug 1226276] Volume heal info not reporting files in split brain and core dumping, after upgrading to 3.7.0 <https://bugzilla.redhat.com/show_bug.cgi?id=1226276>
11:35 glusterbot News from newglusterbugs: [Bug 1226279] GF_CONTENT_KEY should not be handled unless we are sure no other operations are in progress <https://bugzilla.redhat.com/show_bug.cgi?id=1226279>
11:43 dusmant joined #gluster
11:44 bharata_ joined #gluster
11:48 prilly_ joined #gluster
11:50 [7] how does glusterfs decide what gets stored where?
11:51 [7] IIUC adding a brick to an existing distributed volume will make some (or even most?) files on that volume inaccessible until it is rebalanced
11:52 [7] can that be avoided somehow? e.g. by first populating the new brick with the required data, then adding it to the volume, and then removing the relocated data from the old bricks?
11:53 [7] also, how much data does such a rebalance move? does it only move data from old bricks to the new one, or does every brick get a completely new set of data, effectively moving most of the contents of the whole volume?
11:56 [7] the docs seem to contradict each other...
12:00 Slashman joined #gluster
12:00 [Enrico] joined #gluster
12:03 autoditac joined #gluster
12:12 Trefex1 joined #gluster
12:14 rjoseph joined #gluster
12:15 glusterbot News from resolvedglusterbugs: [Bug 1075611] [FEAT] log: enhance gluster log format with message ID and standardize errno reporting <https://bugzilla.redhat.com/show_bug.cgi?id=1075611>
12:18 plarsen joined #gluster
12:36 B21956 joined #gluster
12:39 B21956 joined #gluster
12:46 rafi joined #gluster
12:46 ira joined #gluster
12:52 arcolife joined #gluster
12:54 wkf joined #gluster
12:54 Leildin [7], The rebalance levels data across all bricks
12:55 ProT-0-TypE joined #gluster
12:55 Leildin I added two bricks to a 4-brick distributed volume and rebalanced; the 4 bricks went from 95, 89, 91, 90% full to 78, 78, 79, 78%
12:56 Leildin and I saw data transferred everywhere where it fit best
13:00 [7] Leildin: IIUC it will not be transferred to "where free space is", but to "where it belongs hash-wise"
13:01 [7] and that would likely move around nearly every piece of data
13:01 [7] i.e. the 4 "old" bricks would also be moving around massive amounts  of data between them
13:02 Leildin exactly :) it shifted stuff everywhere and leveled all bricks the same, it was beautiful
13:02 spalai joined #gluster
13:02 [7] now imagine that cluster would have 100 bricks of 1TB each ;)
13:02 [7] poor network...
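For reference, the add-brick plus rebalance sequence being discussed looks roughly like the sketch below (volume and brick names are placeholders). A full rebalance migrates files to wherever the new hash layout says they belong, while the fix-layout variant only rewrites the hash ranges so new files land correctly, without moving existing data:

    gluster volume add-brick myvol server5:/bricks/b1 server6:/bricks/b1
    gluster volume rebalance myvol start
    gluster volume rebalance myvol status
    # layout-only variant, no data migration:
    gluster volume rebalance myvol fix-layout start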
13:07 DV joined #gluster
13:10 LebedevRI joined #gluster
13:15 shubhendu joined #gluster
13:16 ProT-0-TypE joined #gluster
13:16 anrao_ joined #gluster
13:18 bturner_ joined #gluster
13:24 Twistedgrim joined #gluster
13:25 dgandhi joined #gluster
13:25 atinmu joined #gluster
13:25 kkeithley joined #gluster
13:28 georgeh-LT2 joined #gluster
13:33 prilly_ joined #gluster
13:34 DV_ joined #gluster
13:39 chudler joined #gluster
13:56 gem joined #gluster
14:02 harish__ joined #gluster
14:04 s19n left #gluster
14:08 dabear joined #gluster
14:08 dabear Hi people
14:09 dabear I've done some reading on glusterfs and I am planning on implementing a replicated gluster on two nodes
14:09 dabear I will be using this between two raspberry pis both with an usb 3 connected external drive
14:10 soumya joined #gluster
14:11 dabear I am wondering about the concept of bricks. If I have an ext3 formatted external drive that is half full, can I put a brick there?
14:13 dabear If so, if in the future I remove some or most of the other files present on the drive, will the brick auto expand, or would it be limited to half the size of the drive still?
14:21 dusmant joined #gluster
14:22 vovcia dabear: brick is just a directory on filesystem, so yes, You can put a brick on rootfs - but its not recommended
14:23 dabear ok, thanks
14:23 vovcia dabear: free space is computer from underlying filesystem so it will work ok :)
14:23 vovcia *computed
14:23 dabear so why is it not recommended to put the brick directly on a drive?
14:24 dabear and what's the recommended way?
14:24 vovcia recommended is dedicated filesystem only for gluster
14:25 dabear ok thanks :)
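A minimal sketch of the "dedicated filesystem per brick" layout vovcia recommends, assuming placeholder device names, mount points, hostnames and volume name (XFS is shown here, but any dedicated filesystem works):

    # on each Raspberry Pi
    mkfs.xfs /dev/sda1
    mkdir -p /data/glusterfs/myvol
    mount /dev/sda1 /data/glusterfs/myvol
    mkdir /data/glusterfs/myvol/brick1
    # on one node, create and start a 2-way replicated volume
    gluster volume create myvol replica 2 pi1:/data/glusterfs/myvol/brick1 pi2:/data/glusterfs/myvol/brick1
    gluster volume start myvol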
14:25 wushudoin joined #gluster
14:26 vimal joined #gluster
14:29 coredump joined #gluster
14:31 aaronott joined #gluster
14:34 vimal joined #gluster
14:36 atalur joined #gluster
14:39 gem joined #gluster
14:42 Norky joined #gluster
14:43 spalai joined #gluster
14:47 chirino joined #gluster
14:49 ProT-0-TypE joined #gluster
14:50 premera joined #gluster
14:54 hgowtham joined #gluster
14:58 Trefex joined #gluster
15:00 wushudoin| joined #gluster
15:03 SnoWolfe joined #gluster
15:04 SnoWolfe quick question... just started using gluster... when doing "volume top gv0 read-perf" i get a list with a lot of 0's - does this mean anything?
15:09 autoditac joined #gluster
15:11 atalur joined #gluster
15:12 elico joined #gluster
15:13 cholcombe joined #gluster
15:14 gem joined #gluster
15:17 autoditac joined #gluster
15:18 wushudoin| joined #gluster
15:33 MagicDice joined #gluster
15:51 aravindavk joined #gluster
15:54 ndevos bturner_: is that ^ something you would know about?
16:01 SnoWolfe asking again in case was missed - when doing "volume top gv0 read-perf" i get a list with a lot of 0's - does this mean anything?
16:01 SnoWolfe example https://www.filepicker.io/api/file/NrVcGBaWSUKlIN45YXNA
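For context, "volume top read-perf" with no extra arguments only reports throughput figures gluster has already recorded, so the zeros may simply mean nothing meaningful has been measured for those files yet; passing a block size and count makes each brick run an actual read test. A hedged example using the gv0 volume from the question, with placeholder sizes:

    gluster volume top gv0 read-perf bs 256 count 1024 list-cnt 10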
16:02 [o__o] joined #gluster
16:08 kshlm joined #gluster
16:09 overclk joined #gluster
16:15 ira joined #gluster
16:28 rafi joined #gluster
16:29 Trefex1 joined #gluster
16:32 rafi joined #gluster
16:33 Leildin joined #gluster
16:35 rafi joined #gluster
16:41 MagicDice left #gluster
16:44 rafi joined #gluster
16:46 rafi joined #gluster
16:48 coredump joined #gluster
16:50 rafi joined #gluster
16:54 wushudoin| joined #gluster
16:59 wushudoin| joined #gluster
17:00 bene2 joined #gluster
17:02 hagarth joined #gluster
17:11 jbautista- joined #gluster
17:15 Gill joined #gluster
17:16 jbautista- joined #gluster
17:26 spalai joined #gluster
17:31 Rapture joined #gluster
17:33 squizzi joined #gluster
17:41 bennyturns joined #gluster
17:50 jmarley joined #gluster
17:56 stickyboy joined #gluster
18:13 gem joined #gluster
18:31 mbukatov joined #gluster
18:36 Gill joined #gluster
18:44 mcpierce joined #gluster
19:47 Peppard joined #gluster
20:03 ChrisHolcombe joined #gluster
20:11 DV joined #gluster
20:35 CyrilPeponnet plop guys, long time no see :)
20:36 CyrilPeponnet I try to narrow down a performance issue here
20:37 Gill_ joined #gluster
20:37 CyrilPeponnet I have the glusterfsd process for the gluster/nfs volfile-id spiking every 10s to 700% CPU and then going back to normal. The point is that during this spike, every client hangs
20:38 CyrilPeponnet usually rebooting the node fixes the thing for a few days
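Two hedged first steps for a spike like this (volume name is a placeholder): turn on per-FOP profiling and grab a statedump of the busy process while it is spinning:

    gluster volume profile myvol start
    gluster volume profile myvol info        # per-brick FOP latency and call counts
    gluster volume statedump myvol           # dumps land under /var/run/gluster by default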
21:12 cholcombe joined #gluster
21:16 elico joined #gluster
21:18 nsoffer joined #gluster
21:23 rotbeard joined #gluster
21:48 prilly_ joined #gluster
22:19 Prilly joined #gluster
22:23 plarsen joined #gluster
22:25 ctria joined #gluster
22:38 CyrilPeponnet joined #gluster
22:39 CyrilPeponnet joined #gluster
22:39 CyrilPeponnet joined #gluster
22:40 CyrilPeponnet joined #gluster
22:40 CyrilPeponnet joined #gluster
22:40 CyrilPeponnet joined #gluster
22:41 CyrilPeponnet joined #gluster
22:41 elico joined #gluster
22:41 CP|AFK joined #gluster
23:09 prilly_ joined #gluster
23:38 dgandhi joined #gluster
23:50 cuqa joined #gluster
23:50 cuqa hello, I currently have a split brain situation and have an odd problem right now
23:51 cuqa I mounted the volume with splitmount and renamed a folder
23:51 cuqa this did just create an identical folder which cannot be deleted anymore
23:51 cuqa drwxr-xr-x 2 www-data www-data 12619776 Mai 30 01:17 session
23:51 cuqa d????????? ? ?        ?               ?            ? _session
23:53 cuqa /mnt/vol-www/php5 $ rm _session  -f
23:53 cuqa rm: cannot remove `_session': Device or resource busy
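Entries that show up as "d?????????" and refuse removal with "Device or resource busy" typically point at a GFID mismatch or directory split-brain between the bricks. A hedged starting point for inspection, assuming the volume is named vol-www and placeholder brick paths:

    gluster volume heal vol-www info split-brain
    # on each brick, compare trusted.gfid of the two directories
    getfattr -d -m . -e hex /bricks/vol-www/php5/session
    getfattr -d -m . -e hex /bricks/vol-www/php5/_session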
