
IRC log for #gluster, 2013-12-27


All times shown according to UTC.

Time Nick Message
00:15 Staples84 joined #gluster
00:56 harish joined #gluster
01:04 jporterfield joined #gluster
01:09 jporterfield joined #gluster
01:12 vpshastry joined #gluster
01:14 jbautista- joined #gluster
01:33 jporterfield joined #gluster
01:39 jporterfield joined #gluster
01:50 stickyboy joined #gluster
01:50 stickyboy joined #gluster
01:51 zapotah joined #gluster
01:51 zapotah joined #gluster
02:04 jporterfield joined #gluster
02:12 psyl0n joined #gluster
02:12 zapotah joined #gluster
02:12 zapotah joined #gluster
02:20 jporterfield joined #gluster
02:23 zapotah joined #gluster
02:39 jporterfield joined #gluster
02:44 jporterfield joined #gluster
02:57 jporterfield joined #gluster
03:04 jporterfield joined #gluster
03:32 harish joined #gluster
03:45 RameshN joined #gluster
03:45 jporterfield joined #gluster
03:48 RameshN joined #gluster
03:50 ajha joined #gluster
03:54 itisravi joined #gluster
03:54 davinder joined #gluster
04:02 shubhendu joined #gluster
04:05 jporterfield joined #gluster
04:10 jporterfield joined #gluster
04:16 jporterfield joined #gluster
04:19 bala joined #gluster
04:25 GLHMarmo1 left #gluster
04:29 marcoceppi joined #gluster
04:29 nage joined #gluster
04:29 JordanHackworth joined #gluster
04:29 marcoceppi joined #gluster
04:29 nage joined #gluster
04:30 Shdwdrgn joined #gluster
04:31 mohankumar joined #gluster
04:34 kshlm joined #gluster
04:41 bala joined #gluster
04:41 spandit joined #gluster
04:42 itisravi cmd: bug
04:42 itisravi cmd: cmd
04:42 itisravi cmd: fortune
04:42 itisravi cmd: fortunes.dat
04:42 itisravi cmd: gauge
04:42 itisravi cmd: mail
04:42 itisravi cmd: media
04:42 itisravi cmd: sayclip
04:42 itisravi cmd: sysinfo
04:42 itisravi cmd: tinyurl
04:42 itisravi cmd: uptime
04:42 itisravi cmd: /usr/share/kde4/apps/konversation/scripts
04:45 askb joined #gluster
04:54 shubhendu joined #gluster
04:57 MiteshShah joined #gluster
04:58 ppai joined #gluster
04:59 ndarshan joined #gluster
05:01 hagarth joined #gluster
05:01 bala joined #gluster
05:18 psharma joined #gluster
05:19 ababu joined #gluster
05:19 TonySplitBrain joined #gluster
05:21 vpshastry joined #gluster
05:28 kshlm joined #gluster
05:28 jporterfield joined #gluster
05:35 dusmant joined #gluster
05:42 ndarshan joined #gluster
05:42 ajha joined #gluster
05:43 jporterfield joined #gluster
05:46 saurabh joined #gluster
05:49 flakrat joined #gluster
06:00 lalatenduM joined #gluster
06:01 bala joined #gluster
06:02 hflai joined #gluster
06:02 eclectic joined #gluster
06:03 kanagaraj joined #gluster
06:04 Amanda joined #gluster
06:07 wgao__ joined #gluster
06:11 kanagaraj joined #gluster
06:17 nshaikh joined #gluster
06:20 KORG joined #gluster
06:20 ndarshan joined #gluster
06:22 dusmant joined #gluster
06:22 bala joined #gluster
06:23 shubhendu joined #gluster
06:26 kanagaraj joined #gluster
06:26 satheesh joined #gluster
06:26 RameshN joined #gluster
06:33 rastar joined #gluster
06:41 jporterfield joined #gluster
06:49 ngoswami joined #gluster
06:50 XATRIX joined #gluster
06:50 jporterfield joined #gluster
06:57 mohankumar joined #gluster
06:57 CheRi joined #gluster
06:59 bala joined #gluster
07:09 vimal joined #gluster
07:11 ngoswami joined #gluster
07:15 ninkotech_ joined #gluster
07:15 bala joined #gluster
07:20 jtux joined #gluster
07:25 ndarshan joined #gluster
07:29 shubhendu joined #gluster
07:30 RameshN joined #gluster
07:30 kanagaraj joined #gluster
07:34 techminer2 joined #gluster
07:35 blook joined #gluster
07:35 itisravi_ joined #gluster
07:36 osiekhan1 joined #gluster
07:43 ppai joined #gluster
07:43 ndarshan joined #gluster
07:48 jporterfield joined #gluster
07:49 ekuric joined #gluster
07:49 harish joined #gluster
07:49 askb joined #gluster
07:49 aravindavk joined #gluster
08:05 jiphex joined #gluster
08:05 al joined #gluster
08:11 ctria joined #gluster
08:21 MiteshShah joined #gluster
08:31 AndreyGrebenniko joined #gluster
08:34 ricky-ti1 joined #gluster
08:40 andreask joined #gluster
08:49 tjikkun_work joined #gluster
08:50 bala joined #gluster
08:56 MiteshShah joined #gluster
09:04 psharma joined #gluster
09:09 ozux joined #gluster
09:23 ngoswami joined #gluster
09:36 MiteshShah joined #gluster
09:36 psharma joined #gluster
09:37 nage joined #gluster
09:37 nage joined #gluster
09:39 pk joined #gluster
09:41 ekuric1 joined #gluster
09:48 ekuric joined #gluster
09:51 ekuric joined #gluster
09:53 XATRIX Hi, do i have to setup resource in pacemaker to mount glusterfs  after a system start ?
09:53 XATRIX Or it should be mounted via system startup scripts
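
A Pacemaker resource isn't strictly required just to get the volume mounted at boot; the usual route is an fstab entry with the _netdev option so the mount waits for networking (glusterd must also already be running). A minimal sketch, using the volume and mount point XATRIX brings up later in this log:

    # /etc/fstab -- glusterfs mount deferred until the network is up
    localhost:/datastorage  /mnt  glusterfs  defaults,_netdev  0  0

On RHEL/CentOS the netfs service picks up _netdev entries; other distributions order boot mounts differently, so a reboot test is worthwhile.
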
09:56 pk left #gluster
10:03 mgebbe_ joined #gluster
10:06 aravindavk joined #gluster
10:09 psharma joined #gluster
10:14 tziOm joined #gluster
10:20 hagarth joined #gluster
10:24 Staples84 joined #gluster
10:28 rastar joined #gluster
10:40 aravindavk joined #gluster
10:56 diegows joined #gluster
11:04 XATRIX Guys, how can i make an fstab entry ?
11:04 XATRIX loacalhost:/datastorage /mnt    glusterfs       defaults,_netdev        0       0
11:04 XATRIX Doesn't mount it
11:04 XATRIX If i do 'mount.gluster localhost:/datastorage /mnt' - it goes ok
11:06 ekuric joined #gluster
11:10 ekuric joined #gluster
11:11 dhyan joined #gluster
11:17 blook joined #gluster
11:19 rjoseph joined #gluster
11:32 psyl0n joined #gluster
11:32 dusmant joined #gluster
11:34 rotbeard joined #gluster
11:37 kshlm joined #gluster
11:47 blook joined #gluster
12:02 itisravi_ joined #gluster
12:07 andreask joined #gluster
12:10 jporterfield joined #gluster
12:18 kshlm joined #gluster
12:24 dhyan joined #gluster
12:26 jporterfield joined #gluster
12:29 glusterbot New news from resolvedglusterbugs: [Bug 764966] gerrit integration fixes <https://bugzilla.redhat.com/show_bug.cgi?id=764966>
12:30 XATRIX loacalhost:/datastorage /mnt    glusterfs       defaults,_netdev        0       0
12:30 XATRIX Doesn't mount it
12:30 XATRIX If i do 'mount.gluster localhost:/datastorage /mnt' - it goes ok
12:30 XATRIX what's wrong with my fstab string ?
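
Two things stand out in the pasted line: the host is spelled "loacalhost" while the working manual mount uses "localhost", and the mount helper binary is mount.glusterfs rather than mount.gluster. With the hostname corrected, the entry can be checked without rebooting:

    # re-test the fstab entry in place (assumes the hostname typo is fixed)
    mount -a -t glusterfs    # mount every glusterfs entry listed in fstab
    mount /mnt               # or mount just this entry by its mount point
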
12:38 dhyan joined #gluster
12:40 blook joined #gluster
12:41 tjikkun_work joined #gluster
12:42 tjikkun_work joined #gluster
12:44 ndarshan joined #gluster
12:45 hybrid512 Hi everyone
12:45 hybrid512 question : is BD XLATOR included in 3.4 ?
12:46 hybrid512 because I tried to follow this example : http://raobharata.wordpress.com/2013/11/27/glusterfs-block-device-translator/
12:46 hybrid512 but I don't get the same results
12:49 hybrid512 (I don't see Xlator 1: BD
12:49 hybrid512 Capability 1: offload_copy
12:49 hybrid512 Capability 2: offload_snapshot while doing volume info)
12:55 jporterfield joined #gluster
12:57 hybrid512 anyone already played with BD Xlator ?
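
For reference, the article hybrid512 links reports the block-device translator through exactly the volume-info lines he quotes, so a quick check on any volume looks like this (the volume name "myvol" is a placeholder):

    gluster volume info myvol | grep -E 'Xlator|Capability'
    # a BD-backed volume, per the linked article, should report:
    #   Xlator 1: BD
    #   Capability 1: offload_copy
    #   Capability 2: offload_snapshot

If nothing matches, either the volume was not created with BD backing or the installed release predates the reworked BD translator described in that post, which was still very new at the time of this log.
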
13:00 dhyan joined #gluster
13:24 ctria joined #gluster
13:26 jporterfield joined #gluster
13:29 rastar joined #gluster
13:34 sroy_ joined #gluster
13:37 kshlm joined #gluster
13:38 sroy joined #gluster
13:39 harish joined #gluster
13:44 TDJACR joined #gluster
13:50 blook joined #gluster
13:58 ErikEngerd joined #gluster
13:58 ErikEngerd I would like to use an alternative port in an SSH url for geo-replication but I cannot find the correct syntax.
13:59 ErikEngerd If I use    ssh://host:port:/path/to/remote then still it tries to use the default port 22
13:59 ErikEngerd of course, host:port should be  host: port
13:59 ErikEngerd (without the space in between)
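
Geo-replication ultimately shells out to ssh, so one workaround when the URL syntax will not carry a port (an assumption here, not something confirmed in this channel) is to pin the port in root's OpenSSH client config on the master for that slave host:

    # ~/.ssh/config on the master; "slavehost" is a hypothetical name
    Host slavehost
        Port 2222

With that in place, ssh://slavehost:/path/to/remote should connect on 2222 without any port in the URL.
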
14:00 brimstone left #gluster
14:18 vpshastry left #gluster
14:20 robo joined #gluster
14:23 theron joined #gluster
14:29 RameshN joined #gluster
14:32 Alpinist joined #gluster
14:34 johnmilton joined #gluster
14:38 Staples84 joined #gluster
14:43 ErikEngerd How does one stop a faulty geo-replication volume?
14:43 ErikEngerd Just doing gluster volume geo-replication MASTER SLAVE stop
14:43 ErikEngerd does not work.
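
Later GlusterFS releases accept a force variant of the stop command for sessions stuck in a faulty state; whether the release in use here already supports it is an assumption:

    gluster volume geo-replication MASTER SLAVE stop force   # 'force' may not exist in older releases
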
14:57 dbruhn joined #gluster
14:57 vpshastry joined #gluster
15:01 RedShift joined #gluster
15:10 zerick joined #gluster
15:18 hagarth joined #gluster
15:26 jobewan joined #gluster
15:27 ababu joined #gluster
15:29 sroy joined #gluster
16:01 ekuric left #gluster
16:17 jag3773 joined #gluster
16:19 semiosis :O
16:20 bala joined #gluster
16:22 psyl0n joined #gluster
16:33 sroy joined #gluster
16:39 LoudNoises joined #gluster
17:06 vpshastry joined #gluster
17:27 Mo__ joined #gluster
17:46 blook joined #gluster
17:48 davidbierce joined #gluster
17:54 vpshastry left #gluster
18:10 vpshastry joined #gluster
18:13 vpshastry left #gluster
18:15 blook joined #gluster
18:23 ajha joined #gluster
18:52 blook joined #gluster
19:12 getup- joined #gluster
19:14 pk1 joined #gluster
19:16 pk1 left #gluster
19:26 Staples84 joined #gluster
19:50 blook joined #gluster
20:53 psyl0n joined #gluster
21:01 zerick joined #gluster
21:02 flrichar joined #gluster
21:23 ctria joined #gluster
21:32 ^rooker joined #gluster
21:33 ^rooker joined #gluster
21:35 elyograg joined #gluster
21:35 zapotah joined #gluster
21:35 ^rooker Hello. I've set up my first gluster volume with 2 bricks on the same node in distributed mode. Strangely, though, all files are only created on brick1 and brick2 only contains empty folders. I would be grateful if someone could help me with this. Thanks :)
21:42 ^rooker Is there a way I can see the values of the currently active volume options? I've found how to set them, but not how to read them out.
21:52 elyograg ^rooker: gluster volume info <volname>
21:54 ^rooker Thanks elyograg. I did already run volume info, but it's only very little information, like this:
21:54 ^rooker Volume Name: kitschdata
21:54 ^rooker Type: Distribute
21:54 ^rooker Status: Started
21:54 ^rooker Number of Bricks: 2
21:54 ^rooker Transport-type: tcp
21:54 ^rooker Bricks:
21:54 ^rooker Brick1: rasputin:/media/disk1/brick
21:54 ^rooker Brick2: rasputin:/media/disk2/brick
21:54 ^rooker oops. sorry.
21:56 elyograg you can see the options at the end: http://fpaste.org/64596/13881813/
21:56 glusterbot Title: #64596 Fedora Project Pastebin (at fpaste.org)
21:57 ^rooker Hm... Seems like only reconfigured options are displayed. Since I'm running on default values, none are displayed. Is there a way to see gluster's view of the free disk space?
21:58 elyograg mount the volume and do 'df'
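
Spelled out with the volume name from ^rooker's paste above (the mount point is arbitrary):

    mkdir -p /mnt/kitschdata
    mount -t glusterfs localhost:/kitschdata /mnt/kitschdata
    df -h /mnt/kitschdata    # free space as the client sees it, aggregated across the bricks
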
22:00 ^rooker Thanks. Can I use getfattr to see gluster's attributes on the files/folders?
22:00 ^rooker I've tried "getfattr -d" on the individual bricks' files, but the result was empty.
22:02 JoeJulian "gluster volume set help" to see the options and their defaults.
22:02 JoeJulian ~extended attributes | ^rooker
22:02 glusterbot ^rooker: (#1) To read the extended attributes on the server: getfattr -m .  -d -e hex {filename}, or (#2) For more information on how GlusterFS uses extended attributes, see this article: http://hekafs.org/index.php/2011/04/glusterfs-extended-attributes/
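
A plain "getfattr -d" only dumps the user.* namespace, which is why ^rooker's earlier attempt came back empty; the -m . in glusterbot's recipe matches the trusted.* attributes as well (run it as root). Against the brick paths pasted above, a healthy distribute brick root normally carries at least a gfid and a dht layout entry (hex values below are illustrative, not from this system):

    getfattr -m . -d -e hex /media/disk1/brick
    # typical output on a distribute brick:
    # trusted.gfid=0x00000000000000000000000000000001
    # trusted.glusterfs.dht=0x0000000100000000000000007ffffffe
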
22:05 ^rooker JoeJulian: Thanks! Got it. Unfortunately, it doesn't seem to display all volume options. For example "sudo gluster volume set help | grep free" does not return "cluster.min-free-disk" :(
22:06 JoeJulian It does here...
22:06 JoeJulian 3.4.1
22:07 davidbierce joined #gluster
22:08 elyograg looks like it defaults to 10%. http://support.zresearch.com/community/documentation/index.php?title=Gluster_3.2:_Tuning_Volume_Options&redirect=no (at least as of 3.2)
22:08 ^rooker @glusterbot: Thanks. Got results now. Is it normal that only the first brick has "trusted.glusterfs.dht" set?
22:08 JoeJulian That's what my "gluster volume set help" shows as well.
22:10 JoeJulian Nope, not normal.
22:11 JoeJulian Did you start with one brick and add-brick to get the second?
22:11 ^rooker Nope. I've followed the tutorial on gluster.org and added both bricks in a single "gluster volume create" statement.
22:12 JoeJulian Are you using the ,,(latest) version?
22:12 glusterbot The latest version is available at http://download.gluster.org/pub/gluster/glusterfs/LATEST/ . There is a .repo file for yum or see @ppa for ubuntu.
22:13 ^rooker not using the latest version. intentionally. I'm testing for a huge (>600TB) production system running on RedHat, which currently ships with gluster 3.2. I'm currently on Raspbian with glusterfs v3.2.7.
22:16 JoeJulian Is this Red Hat Storage?
22:17 ^rooker No. regular RHEL with glusterfs from CentOS.
22:19 JoeJulian GlusterFS isn't in CentOS <6.5 and CentOS 6.5 ships with only the client from RHS which is 3.4.0.
22:19 JoeJulian Did you mean EPEL?
22:20 ^rooker Hm... I'm currently home and the system is in the office, but I thought it was 3.2. Maybe I just remember incorrectly.
22:21 ^rooker Are there known issues with distributed mode on v3.2?
22:22 JoeJulian I don't remember seeing anything like that back then, but I personally skipped 3.2 completely and went from 3.1 to 3.3.
22:22 JoeJulian ... and there are, indeed, lots of known issues with 3.2. Most of them should be non-fatal though.
22:23 ^rooker Is there anything I could have done wrong during creating the volume? Missing parameter, etc...
22:23 JoeJulian Not if you followed those instructions. The only way
22:24 JoeJulian ... I can think that could have happened is by adding the brick after the fact.
22:24 JoeJulian To "fix" it you could just do a rebalance.
22:26 ^rooker I've noticed reverse DNS problems with my current hostname. The link to changing the hostname is unfortunately broken in "http://gluster.org/community/documentation/index.php/Gluster_3.2:_Brick_Restoration_-_Replace_Crashed_Server"
22:31 ^rooker I've started rebalance...
22:35 JoeJulian That's odd... I thought I remembered someone recreating that page under the wiki. All it is, though, is using "gluster volume replace-brick old-dead-server:/brick/path new-server:/brick/path commit force"
22:35 ^rooker thanks :)
22:35 JoeJulian And reverse dns shouldn't matter.
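
For completeness, replace-brick also takes the volume name as its first argument; the full form of the command JoeJulian quotes is:

    gluster volume replace-brick <VOLNAME> old-dead-server:/brick/path new-server:/brick/path commit force
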
22:37 ^rooker Strange: Rebalancing status says it has already rebalanced 19 files with around 300 MB, but there is no change in filesize when running "du -sh" on the individual bricks.
22:38 ^rooker but now both bricks have "trusted.glusterfs.dht" entries
22:39 mattappe_ joined #gluster
22:45 ^rooker Hm... something's wrong here with rebalancing: "du -s" shows that brick1's size decreases, increases again, decreases, and so on... Like it's copying in a circle.
22:48 ^rooker I've just found this in the logs: [dht-diskusage.c:211:dht_is_subvol_filled] 0-kitschdata-dht: disk space on subvolume 'kitschdata-client-1' is getting full (93.00 %), consider adding more nodes
22:49 ^rooker That's less than 10% free space, so the files that should go on brick2 are written to brick1 instead.
22:53 ^rooker I've changed volume option "cluster.min-free-disk" to 1%. Restarted rebalance and now it moves files to brick2! Yay. Thanks a lot!
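
The sequence ^rooker describes, written out with his volume name (the value 1 is read as a percentage, matching the "cluster.min-free-disk: 1" he sees in volume info afterwards):

    gluster volume set kitschdata cluster.min-free-disk 1
    gluster volume rebalance kitschdata start
    gluster volume rebalance kitschdata status   # watch files migrate to the emptier brick
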
22:54 JoeJulian cool, glad that worked for you.
22:54 ^rooker Also, thanks for pointing out v3.4. I'll check that.
22:56 ^rooker And yes: now the configured option "cluster.min-free-disk: 1" also shows up on volume info :)
23:14 theron joined #gluster
23:39 robo joined #gluster
23:45 DV joined #gluster
