
IRC log for #gluster, 2012-10-16


All times shown according to UTC.

Time Nick Message
00:01 semiosis tc00per: so... what rotates your glusterfs logs now?
00:02 tc00per /etc/logrotate.d/glusterd
00:03 tc00per same file (md5sum checked) as provided by glusterfs-server rpm.
00:04 tc00per Not sure the name makes a whole lot of sense but it shouldn't be a problem.
00:07 semiosis oh cool
00:13 hagarth joined #gluster
00:14 redsolar_office joined #gluster
00:18 redsolar joined #gluster
00:24 aliguori joined #gluster
00:35 tc00per left #gluster
00:42 tc00per joined #gluster
00:42 plarsen joined #gluster
00:48 manik joined #gluster
00:58 lng joined #gluster
00:59 lng Hi! On 10G storage, rebalancing takes 14 hours already. Why is it so long?
00:59 hchiramm_ joined #gluster
01:00 JoeJulian lng: The size of the drive means nothing. How many files and directories?
01:00 lng JoeJulian: a lot of files
01:00 lng JoeJulian: ~3k per file
01:00 JoeJulian Then it'll take a correspondingly "lot of" time.
01:01 semiosis df -i reports inode counts, that's a good way to get a quick ballpark figure of how many files+dirs there are, if i understand correctly
01:01 semiosis but yeah, rebal is "expensive"
01:01 semiosis imho avoiding rebal is a best practice :)
01:01 lng Inodes:419430400
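For reference, a minimal sketch of the inode-count check semiosis mentions; the brick path is illustrative, not from the log.

    # inodes in use on a brick filesystem roughly equals the number of files + directories
    df -i /export/brick1    # look at the IUsed column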
01:02 lng semiosis: what is going to happen if bricks were added without rebalancing?
01:03 semiosis when i was evaluating glusterfs i set up a small scale test of my production dataset and based on how many *days* the rebalance took I decided to design so i would not have to it, hopefully ever
01:03 semiosis (at least, until rebalance got a lot better)
01:03 semiosis s/to it/to do it/
01:03 glusterbot What semiosis meant to say was: when i was evaluating glusterfs i set up a small scale test of my production dataset and based on how many *days* the rebalance took I decided to design so i would not have to do it, hopefully ever
01:04 semiosis lng: after add-brick, before any rebalance, files will not be placed on the new bricks, making that space basically unusable
01:04 semiosis you can do a fix-layout rebal which will make that space available without incurring the full cost of a real rebalance, but that's not cheap either (though not as bad as a full rebalance)
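A hedged sketch of the commands under discussion, using GlusterFS 3.3 syntax; the volume and brick names are illustrative.

    # add new bricks to an existing distributed volume
    gluster volume add-brick myvol server3:/export/brick1 server4:/export/brick1
    # fix-layout recomputes directory hash ranges so new files can land on the
    # new bricks, without migrating any existing data
    gluster volume rebalance myvol fix-layout start
    # a full rebalance also migrates existing files - the expensive option
    gluster volume rebalance myvol start
    gluster volume rebalance myvol status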
01:05 lng bb soon - scrum meeting
01:05 semiosis i'm outta here, see you tmrw
01:06 Alpha64 joined #gluster
01:09 lng re
01:12 lng would new bricks be available for storing files if rebalance was interrupted?
01:14 redsolar_office joined #gluster
01:16 JoeJulian @yum repo
01:16 glusterbot JoeJulian: kkeithley's fedorapeople.org yum repository has 32- and 64-bit glusterfs 3.3 packages for RHEL/Fedora/Centos distributions: http://goo.gl/EyoCw
01:16 JoeJulian lng: perhaps.
01:16 JoeJulian Depends on how far along it got.
01:18 redsolar joined #gluster
01:18 lng JoeJulian: if I stop it and start it again, would it start from scratch?
01:21 JoeJulian I'm mostly sure it would. I'm in the same boat as semiosis. I designed to avoid rebalancing so I don't have much experience with it.
01:21 redsolar_office joined #gluster
01:39 koaps joined #gluster
01:39 koaps hello
01:39 glusterbot koaps: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
01:42 redsolar joined #gluster
01:42 koaps I'm trying to use gluster for openstack live migrations, and trying to figure out why when I attempt to start an instance it only writes like 200K of the 30GB disk image, stops and guestmount fails. Is there something that would restrict the file size? I have over 4TB free on the mount point being used for gluster
01:43 kevein joined #gluster
02:12 berend joined #gluster
02:25 sunus joined #gluster
02:44 sgowda joined #gluster
02:45 hagarth joined #gluster
02:52 ika2810 joined #gluster
03:09 mooorten joined #gluster
03:17 kshlm joined #gluster
03:17 kshlm joined #gluster
03:28 hagarth joined #gluster
03:32 shylesh joined #gluster
03:37 _Marvel_ joined #gluster
04:06 ngoswami joined #gluster
04:14 sripathi joined #gluster
04:15 sunus hi, does GLUSTERFS_DISABLE_MEM_ACCT exist?
04:15 vpshastry joined #gluster
04:15 sunus it's an env var
04:37 Alpha64 joined #gluster
04:43 sripathi joined #gluster
04:44 benner joined #gluster
04:58 bala1 joined #gluster
05:00 kshlm joined #gluster
05:00 kshlm joined #gluster
05:10 faizan joined #gluster
05:17 raghu joined #gluster
05:18 sashko joined #gluster
05:23 ramkrsna joined #gluster
05:31 lng seems like Gluster is not easy to scale with such a slow rebalancing feature
05:35 wushudoin joined #gluster
05:44 sripathi joined #gluster
05:47 guigui3 joined #gluster
05:59 hchiramm_ joined #gluster
05:59 glusterbot New news from resolvedglusterbugs: [Bug 764890] Keep code more readable and clean <https://bugzilla.redhat.com/show_bug.cgi?id=764890>
06:05 sripathi joined #gluster
06:13 mdarade joined #gluster
06:19 mtanner joined #gluster
06:20 vpshastry joined #gluster
06:23 Nr18 joined #gluster
06:24 badone_home joined #gluster
06:33 sunus hi, does GLUSTERFS_DISABLE_MEM_ACCT exist?
06:34 sunus it's an env var
06:34 samppah i haven't heard about that before
06:35 sunus it's in glusterfsd.c
06:37 samppah johnmark: thanks :)
06:45 ngoswami joined #gluster
06:46 mtanner joined #gluster
06:47 lkthomas joined #gluster
06:49 stickyboy joined #gluster
06:57 mtanner joined #gluster
06:58 puebele joined #gluster
07:07 sashko joined #gluster
07:07 mtanner joined #gluster
07:12 hagarth joined #gluster
07:17 puebele1 joined #gluster
07:17 michig joined #gluster
07:25 mtanner joined #gluster
07:28 Nr18 joined #gluster
07:31 andreask joined #gluster
07:33 Azrael808 joined #gluster
07:34 Azrael808 joined #gluster
07:34 tjikkun_work joined #gluster
07:41 oneiroi joined #gluster
07:45 mdarade left #gluster
07:55 saz_ joined #gluster
07:57 oneiroi joined #gluster
07:58 dobber joined #gluster
08:00 glusterbot New news from newglusterbugs: [Bug 859183] volume set gives wrong help question <https://bugzilla.redhat.com/show_bug.cgi?id=859183>
08:02 Triade joined #gluster
08:03 TheHaven joined #gluster
08:24 ankit9 joined #gluster
08:26 spn joined #gluster
08:29 lkoranda joined #gluster
08:30 mdarade1 joined #gluster
08:31 mdarade2 joined #gluster
08:47 ngoswami joined #gluster
08:47 dobber joined #gluster
08:48 ekuric joined #gluster
08:48 Gilbs joined #gluster
09:04 Azrael808 joined #gluster
09:17 rgustafs joined #gluster
09:17 badone_home joined #gluster
09:33 bulde1 joined #gluster
09:45 ngoswami joined #gluster
10:01 badone_home joined #gluster
10:03 bulde1 joined #gluster
10:18 sripathi joined #gluster
10:19 sgowda joined #gluster
10:25 badone_home joined #gluster
10:30 glusterbot New news from newglusterbugs: [Bug 866916] volume info displays information about a brick that has already been removed <https://bugzilla.redhat.com/show_bug.cgi?id=866916>
10:38 sripathi joined #gluster
10:38 sgowda joined #gluster
10:41 ankit9 joined #gluster
10:49 zArmon joined #gluster
10:51 StreamTom joined #gluster
10:52 StreamTom Hello there, I have a quick question, does gluster 3.2 have quorum or was that only in 3.3?
10:54 mdarade joined #gluster
10:54 bulde1 StreamTom: only 3.3
10:55 StreamTom bulde1: Thanks, I think I must've misread the documentation. Thanks for your time
10:55 StreamTom left #gluster
10:58 lng after adding bricks the user became root and https was not able to write to them. why??
10:58 lng when I created the volumes, I `chown www-data:www-data -R "$MNT"`
10:59 lng exactly the same part of shell script
10:59 lng which was used for storage creation
11:00 lng I was not able to chown the mount point - root ownership remained
11:00 lng as a result I removed these bricks
11:01 lng but the reason for the permissions issue is not clear
11:01 lng as far as I know Gluster should inherit permissions from the mount point
11:02 mdarade joined #gluster
11:03 dobber Is there a changelog for 3.3.1 ?
11:03 mdarade left #gluster
11:06 samppah dobber: commits for 3.3.1: http://www.gluster.org/community/documentation/index.php/GlusterFS_3.3.1
11:07 bulde1 lng: http://review.gluster.org/3964 posted to fix the exact issue
11:07 glusterbot Title: Gerrit Code Review (at review.gluster.org)
11:08 bulde1 the issue is we don't 'heal' brick's '/' permission... so it gets converted back to root
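One possible workaround, assuming the fix above is not yet deployed: re-apply the ownership through a client mount, which propagates to each brick's root directory. Untested here; host, volume, and user names are illustrative.

    mount -t glusterfs server1:/myvol /mnt/myvol
    # chown on the volume root goes through the client stack to every brick's root
    chown www-data:www-data /mnt/myvol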
11:16 lng is it bug?
11:17 lng how can I patch live servers?
11:19 kkeithley1 joined #gluster
11:20 ika2810 left #gluster
11:21 lng bulde1: but the mount point owner was set correctly before it was added to Gluster
11:21 badone_home joined #gluster
11:28 bulde1 lng: you are saying the bricks had proper permission before getting added?
11:28 lng yes
11:28 bulde1 or the gluster mount?
11:28 lng mount is fine
11:29 lng before adding permissions were correct
11:29 lng after adding them, user became root
11:29 lng I will try to reproduce it on testing env
11:30 bulde1 lng: let me check this behavior, was not aware of any such bugs... can you file a bug report with your gluster volume info; glusterfs --version; gluster volume status etc? also see if anything is prominent in logs
11:30 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
11:30 lng yes, I will
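For completeness, the diagnostics bulde1 asks for, as a sketch with an illustrative volume name:

    glusterfs --version
    gluster volume info myvol
    gluster volume status myvol
    # client and brick logs usually live under /var/log/glusterfs/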
11:31 lng bulde1: after brick was removed from cluster, is it possible to bring it back to it?
11:33 edward1 joined #gluster
11:34 Azrael808 joined #gluster
11:42 ngoswami joined #gluster
11:47 mdarade1 joined #gluster
11:47 mdarade1 left #gluster
11:49 bulde1 joined #gluster
11:53 sripathi joined #gluster
12:03 ekuric left #gluster
12:04 ekuric joined #gluster
12:04 ankit9 joined #gluster
12:08 TheHaven joined #gluster
12:12 mdarade joined #gluster
12:20 sripathi joined #gluster
12:24 mdarade1 joined #gluster
12:30 aliguori joined #gluster
12:41 nocturn joined #gluster
12:47 plarsen joined #gluster
12:54 JoeJulian koaps: No, there's no restriction and I've been mounting a gluster volume at /var/lib/nova/instances and it's been working fine.
13:00 rwheeler joined #gluster
13:05 bennyturns joined #gluster
13:15 lh joined #gluster
13:15 lh joined #gluster
13:16 nocturn Hi all
13:16 nocturn We installed a gluster 3.2 cluster on debian squeeze.  But just found out that the filesystems were mounted without extended attributes.
13:17 JoeJulian What filesystem?
13:17 nocturn We already wrote files to the volume, what's our best strategy for recovery?
13:17 nocturn ext4
13:18 JoeJulian Unless you specifically did something to disable extended attributes, they were probably enabled. user_xattr is not on by default, but system xattrs are.
13:18 JoeJulian You can verify this by doing "getfattr -m . -d -e hex $somefile" where $somefile is some file on your brick.
13:21 nocturn OK, I get output
13:21 nocturn trusted.afr.clusterfs-client-0=0x000000000000000000000000
13:21 nocturn trusted.afr.clusterfs-client-1=0x000000000000000000000000
13:22 nocturn trusted.gfid=0xcb1cb9a49963404bb770ca34cdf56155
13:22 JoeJulian Cool, you're golden.
13:22 nocturn The problem is that files are not showing up in gluster, though they are on the bricks
13:22 JoeJulian Any clues in your client log?
13:23 nocturn no, nothing
13:24 JoeJulian ~pasteinfo | nocturn
13:24 glusterbot nocturn: Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
13:25 nocturn http://fpaste.org/FiGg/
13:25 glusterbot Title: Viewing Paste #243647 (at fpaste.org)
13:26 JoeJulian Also mount your client to a different directory then ls that mountpoint. Then fpaste that new log.
13:27 nocturn JoeJulian: do you mean my gluster fuse mountpoint?
13:27 JoeJulian yes
13:27 JoeJulian If you just mount the client again at like /mnt/foo, it'll create a new log /var/log/glusterfs/mnt-foo.log
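A sketch of that suggestion; the mount point is illustrative, the volume name is taken from later in the log.

    mkdir -p /mnt/foo
    mount -t glusterfs localhost:/clusterfs /mnt/foo
    ls /mnt/foo
    # the fresh client log is named after the mount point
    tail -n 100 /var/log/glusterfs/mnt-foo.log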
13:27 nocturn on this cluster, we only mount it as NFS
13:28 JoeJulian What's your mount command?
13:28 nocturn For the ext4: /dev/pve/glfs /glfs ext4 defaults 0 1
13:29 nocturn NFS is done by proxmox
13:29 nocturn mount entry: localhost:/clusterfs on /mnt/pve/clusterfs type nfs (rw,vers=3,addr=127.0.0.1)
13:30 JoeJulian That works without specifying tcp?
13:31 nocturn yes
13:32 nocturn We just passed version 3 as an option in the proxmox gui
13:32 JoeJulian I hate looking at nfs.log, but how about pasting the last 100 lines or so?
13:34 nocturn no entries  in it...
13:35 nocturn there are older rotated files from oct 10, but the files were written there today
13:35 Nr18 joined #gluster
13:36 JoeJulian Ok, let's start by making sure it's not something more nefarious. Can you do a fuse mount and ensure that the files list correctly?
13:43 nocturn done, but the file we wrote is missing there too
13:44 JoeJulian badone_: The native client uses privileged ports because they're privileged. That and the weak auth lists are as close as GlusterFS currently gets to ensuring that the client is legit.
13:44 JoeJulian nocturn: Did you write to the brick, or through the mountpoint?
13:45 nocturn to the mountpoint
13:45 JoeJulian But it's just one file that's not showing up?
13:45 JoeJulian Can you stat that file by name?
13:46 nocturn we copied two files (ISO images)
13:46 nocturn one shows up
13:46 nocturn one does not
13:46 nocturn the largest one shows
13:46 JoeJulian @meh
13:46 glusterbot JoeJulian: I'm not happy about it either
13:46 JoeJulian Ok, stat the missing file by name at the fuse mount and check the client log.
13:47 pithagorians joined #gluster
13:48 ndevos JoeJulian: so, glusterd and glusterfsd contain a check on the source-port-number? (badone pinged me about the topic too)
13:48 pithagorians hi all. i have a problem with glusterfs. when i run ls -al on the mounted partition the listing takes some minutes. what is the issue? i'm new to the field.
13:48 duerF joined #gluster
13:49 nocturn stat /mnt/tmp/data/template/iso/windows2008.iso
13:49 nocturn stat: cannot stat `/mnt/tmp/data/template/iso/windows2008.iso': No such file or directory
13:50 nocturn it now shows in the nfs mount though
13:50 nocturn stat /mnt/pve/clusterfs/template/iso/windows2008.iso
13:50 nocturn File: `/mnt/pve/clusterfs/template/iso/windows2008.iso'
13:50 JoeJulian ndevos: yes, there's an option "server.allow-insecure" to bypass it.
13:51 JoeJulian nocturn: I expect the client log will show some self-heal.
13:51 JoeJulian Not sure how it got in that state though.
13:51 ndevos JoeJulian: ah, okay, any idea how clients can be instructed to use a non-privileged port?
13:52 JoeJulian looks like there's a command line option, "client-bind-insecure"
13:52 JoeJulian pithagorians: Large numbers of directory entries in that directory?
13:53 ndevos yeah, seen that, but I cant get it to work :-/
13:55 JoeJulian ndevos: Can't get it to work in what way? The mount script rejects it, or it doesn't actually use unprivileged ports?
13:56 nocturn JoeJulian: nothing http://pastie.org/5067859
13:56 glusterbot Title: #5067859 - Pastie (at pastie.org)
13:56 * JoeJulian throws up his hands in disgust.
13:57 Alpinist joined #gluster
13:57 nocturn JoeJulian: I can stat the file over nfs by full path, but if I do ls in the directory, it does not show
13:58 JoeJulian Does ls on the fuse path show?
13:58 JoeJulian Could be the kernel fscache, in which case unmounting and mounting should fix that.
13:59 pithagorians <JoeJulian> 34623 files and one folder with other 16992 entries
13:59 stopbit joined #gluster
14:00 nocturn JoeJulian: ls does not show it on either mount point
14:00 Nr18 joined #gluster
14:00 JoeJulian pithagorians: Yeah, that's going to be slow. ls checks the file stats in building its listing. Each file lookup() causes a self-heal check. Doing something like "echo *" would work much faster.
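A rough illustration of the difference; the directory path is illustrative.

    # ls -l stats every entry, and each lookup triggers a self-heal check on a replicated volume
    time ls -l /mnt/myvol/bigdir > /dev/null
    # a plain glob only needs the directory listing, with no per-file stat
    time bash -c 'cd /mnt/myvol/bigdir && echo * > /dev/null'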
14:01 JoeJulian nocturn: What version is this?
14:01 nocturn JoeJulian: glusterfs 3.2.7
14:03 JoeJulian nocturn: This is way too early in the morning for these kinds of brainteasers. ;)
14:03 nocturn JoeJulian: yeah, luckily it's almost close of businessday here :-)
14:04 JoeJulian nocturn: Have you ever done a rebalance?
14:04 nocturn JoeJulian: no
14:06 wN joined #gluster
14:07 JoeJulian fpaste "getfattr -m . -d -e hex $target" from all the servers where $target is the directory on the bricks with that file. Also that same getfattr for that file on the bricks where it exists.
14:12 ndevos JoeJulian: the mount script does not like it, starting glusterfs by hand works, but it's always ports < 1024 for me :-/
14:12 nocturn JoeJulian: dirs exist
14:13 ndevos (well, thats with --xlator-option=transport.client-bind-insecure=1 and variants on 'transport')
14:13 nocturn fpasting output
14:15 Alpinist JoeJulian: the x_attr on the bricks: http://fpaste.org/BVu6/
14:15 glusterbot Title: Viewing Paste #243671 (at fpaste.org)
14:16 JoeJulian Hmm... 5 and 6 don't have dht masks.
14:17 JoeJulian Were they added after the initial volume creation?
14:18 Alpinist Yes, they were added after the creation of the volume
14:19 JoeJulian Interesting. I would almost bet that the file that doesn't show up exists on the 5,6 pair.
14:20 JoeJulian Ooh, and there wasn't room on the other bricks to add that iso?
14:20 JoeJulian That's an interesting bug if that's true.
14:21 JoeJulian The solution is simple to do a rebalance. At minimum you need to do a "rebalance ... fix-layout".
14:21 JoeJulian s/simple/simply/
14:21 glusterbot What JoeJulian meant to say was: The solution is simply to do a rebalance. At minimum you need to do a "rebalance ... fix-layout".
14:23 Alpinist Ahaa JoeJulian, while checking node05 and 6 i found out we've got a problem on node02: root@node02:~# ls /mnt/pve/clusterfs/template/iso/
14:23 Alpinist ls: reading directory /mnt/pve/clusterfs/template/iso/: Too many levels of symbolic links
14:29 Alpinist JoeJulian, i'm not able to start rebalancing: root@node01:~# gluster volume rebalance clusterfs status           rebalance not started
14:30 TheHaven joined #gluster
14:31 Alpinist my mistake: it's rebalance start of course
14:31 nocturn left #gluster
14:32 JoeJulian Cool. Ok, I've gotta run. Need to get changed out of my pj's and get headed in to the office.
14:32 nocturn joined #gluster
14:33 Alpinist root@node05:~# gluster volume rebalance clusterfs status
14:33 Alpinist rebalance step 1: layout fix in progress: fixed layout 11
14:34 Alpinist JoeJulian, thanks for your help, have a nice day at the office
14:34 dblack joined #gluster
14:41 johnmark hey gang - download.gluster.org is back
14:41 johnmark right now, there's only 3.3.1
14:41 johnmark for which there will be a release announcement later today
14:41 semiosis :O
14:42 johnmark :O
14:49 Technicool joined #gluster
14:57 daMaestro joined #gluster
14:58 kkeithley_wfh I don't think the dns ttl has expired yet. trying to open download.gluster.com (both through work vpn and home direct internet) didn't connect to the server.
14:58 kkeithley_wfh traceroute to download.gluster.com (from home) shows it trying to get to iweb, not rackspace.
14:58 kkeithley_wfh johnmark: ^^^
14:59 johnmark kkeithley_wfh: download.gluster.org
14:59 johnmark we're phasing out all usage of .com
14:59 NuxRo johnmark: changelog for 3.3.1 ?
14:59 johnmark NuxRo: see http://www.gluster.org/community/documentation/index.php/GlusterFS_3.3.1
14:59 johnmark I need help paring that down
15:00 semiosis also still need a git tag for 3.3.1 afaict
15:00 semiosis hagarth_: ^^ ?
15:03 NuxRo thanks
15:03 hagarth joined #gluster
15:04 pithagorians <JoeJulian> for the network-attached partition - time echo * (real    20m33.828s
15:04 pithagorians user    0m0.272s
15:04 pithagorians sys     0m0.040s
15:04 pithagorians )
15:04 y4m4 joined #gluster
15:04 JoeJulian Well, there goes my train ride into town: http://twitpic.com/b4pjpz
15:06 hagarth_ joined #gluster
15:07 guigui3 joined #gluster
15:09 samppah @options
15:09 glusterbot samppah: http://gluster.org/community/documentation/index.php/Gluster_3.2:_Setting_Volume_Options
15:10 Psi-Jack joined #gluster
15:12 samppah hmm.. is it possible to set time how often georeplication syncs data?
15:17 shylesh joined #gluster
15:18 mdarade1 left #gluster
15:20 TheHaven joined #gluster
15:22 Alpinist how long does a rebalance take?
15:23 akadaedalus joined #gluster
15:25 sashko joined #gluster
15:29 chrizz- left #gluster
15:30 nocturn left #gluster
15:31 bala1 joined #gluster
15:37 _Marvel_ joined #gluster
15:38 shylesh joined #gluster
15:46 bfoster joined #gluster
15:47 shylesh joined #gluster
15:51 JoeJulian samppah: I can't find any settings with a cursory look.
15:51 JoeJulian Alpinist: How long is a string?
15:52 * JoeJulian should start answering every question with a Zen question.
15:53 Gilbs left #gluster
15:58 neofob joined #gluster
16:00 seanh-ansca joined #gluster
16:02 rwheeler joined #gluster
16:05 Teknix joined #gluster
16:09 Bullardo joined #gluster
16:09 saz_ joined #gluster
16:10 Nr18 joined #gluster
16:20 kkeithley_wfh @repos
16:20 glusterbot kkeithley_wfh: I do not know about 'repos', but I do know about these similar topics: 'repository', 'yum repository'
16:20 kkeithley_wfh @repository
16:20 glusterbot kkeithley_wfh: git clone https://github.com/gluster/glusterfs.git
16:21 kkeithley_wfh @ppas
16:21 glusterbot kkeithley_wfh: I do not know about 'ppas', but I do know about these similar topics: 'ppa'
16:21 kkeithley_wfh @ppa
16:21 glusterbot kkeithley_wfh: semiosis' Launchpad PPAs have 32 & 64-bit binary packages of the latest Glusterfs for Ubuntu Lucid - Precise. http://goo.gl/DzomL (for 3.1.x) and http://goo.gl/TNN6N (for 3.2.x) and http://goo.gl/TISkP (for 3.3.x). See also @upstart
16:38 Bullardo joined #gluster
16:45 manik joined #gluster
16:52 vimal joined #gluster
16:53 ika2810 joined #gluster
16:55 Nr18 joined #gluster
16:55 koaps hi JoeJulian, do you use SAN along with gluster for openstack instances?
17:01 blendedbychris joined #gluster
17:01 blendedbychris joined #gluster
17:06 Alpinist JoeJulian, we've only got 20 GB in use, most in files larger than a gigabyte
17:06 zoldar I don't want to claim that I've found a bug, but the following has bitten me when setting up geo-replication: When configuring it with mountbroker, I've been following instructions from the Admin Guide - I've chosen the option with the command parameter in authorized_keys. The issue was signaled by "gsyncd sibling not found" in logs. After recompiling gsyncd wrapper with a bit more debugging messages, the following problem came out: The wrapper checks dire
17:09 raghu joined #gluster
17:11 neofob i occasionally see files with 000 and a letter 'T' when i do ls -l, some sort of glusterfs bug? i can chmod them back to 644 as i want but this is annoying
17:11 neofob 000 permission
17:11 semiosis neofob: are you seeing that on the backend bricks or through the glusterfs client mount point?
17:12 semiosis neofob: on the bricks, those are link files, they have ,,(extended attributes) which point to another path usually on another brick
17:12 glusterbot neofob: (#1) To read the extended attributes on the server: getfattr -m .  -d -e hex {filename}, or (#2) For more information on how GlusterFS uses extended attributes, see this article: http://hekafs.org/index.php/2011/04/glusterfs-extended-attributes/
17:13 semiosis if you're seeing them through the client, idk what that means
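A sketch of what such a link file typically looks like when inspected directly on a brick; the path is illustrative.

    # DHT link files are zero-length, mode 1000 (shown by ls as ---------T)
    ls -l /export/brick1/path/to/file
    # the linkto xattr names the subvolume that actually holds the data
    getfattr -m . -d -e hex /export/brick1/path/to/file
    # look for trusted.glusterfs.dht.linkto in the output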
17:16 neofob semiosis: i see that through gluster client
17:17 neofob so sometimes i can't cp a file; i do a ls -l and its permission is something like '-----T'
17:18 neofob my healing process crashed one time before, and i had to reboot the servers; could it be the problem?
17:19 semiosis neofob: check client log file, you can pastie.org it if you want, maybe that would help diagnosis
17:21 neofob semiosis: thanks for the tip.
17:28 zoldar has anybody else run into the issue I've mentioned? Maybe it's me doing something wrong but I've reread the guide several times and didn't find any hint/warning about such a problem. Maybe command/authorized_keys behavior is os specific (I use Debian).
17:35 rwheeler joined #gluster
17:35 Alpha64 joined #gluster
17:35 Teknix joined #gluster
17:36 faizan joined #gluster
17:39 Nr18 joined #gluster
17:43 hagarth joined #gluster
17:44 andreask joined #gluster
17:45 clopez joined #gluster
17:45 clopez hi
17:45 glusterbot clopez: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
17:47 clopez i have a question about glusterfs distributed volumes (not striped) ....  if one or more of the servers that are part of the gluster distributed volume fails ... what happens to the volume? can the files that are stored on the servers still online be accessed, or does everything break?
17:51 blendedbychris joined #gluster
17:51 blendedbychris joined #gluster
17:52 semiosis clopez: last time i tried that the portion of files on the missing server disappeared from the volume.  the surviving files could be read ok.
17:53 semiosis writing to those surviving files worked OK, but writing to new files failed randomly... if the file were to be placed on a surviving brick, it would succeed, if it were to be placed on the missing brick, it would fail
17:53 semiosis there may have been other issues i'm forgetting, and it may behave differently now, it's been a while
17:53 semiosis if you need the volume to survive loss of a brick, use replication
17:53 semiosis distributed-replicated is probably the most common type of volume
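A hedged sketch of creating such a volume; hostnames and brick paths are illustrative.

    # four bricks form two replica pairs; files are distributed across the pairs
    # and mirrored within each pair, so losing one server does not lose data
    gluster volume create myvol replica 2 \
        server1:/export/brick1 server2:/export/brick1 \
        server3:/export/brick1 server4:/export/brick1
    gluster volume start myvol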
17:54 semiosis bbiab
17:54 kaptk2 joined #gluster
18:07 faizan joined #gluster
18:08 clopez i see
18:08 clopez thanks for the info :)
18:11 _pol joined #gluster
18:11 _pol any gluster.org admins around?
18:12 kkeithley_wfh _pol: for?
18:12 _pol kkeithley_wfh: some of the downloads appear to be missing
18:12 _pol kkeithley_wfh: for example, there is nothing in: http://download.gluster.org/pub/gluster/glusterfs/3.2/
18:12 kkeithley_wfh A lot of downloads are missing.
18:12 glusterbot Title: Index of /pub/gluster/glusterfs/3.2 (at download.gluster.org)
18:13 _pol kkeithley_wfh: any news about that?
18:13 Technicool _pol, you can get packages from bits.gluster.com/pub/ for now
18:14 robo joined #gluster
18:14 _pol Technicool: thanks
18:18 johnmark _pol: yeah, all of the downloads, except for 3.3.1, are missing
18:19 johnmark we will be filling in other builds gradually. which do you need i nparticular?
18:19 jdarcy joined #gluster
18:19 _pol So, it looks like bits.gluster only has rpms, are there debs anywhere?
18:20 _pol I am looking for: /pub/gluster/glusterfs/3.2/3.2.5/Ubuntu/11.10/glusterfs_3.2.5-1_amd64.deb
18:20 aliguori joined #gluster
18:21 _pol johnmark: any ideas?
18:27 johnmark _pol: for Ubuntu? or Debian?
18:28 _pol Ubuntu (it would have been at that URL)
18:28 johnmark _pol: oh duh heh
18:28 _pol :D
18:28 johnmark _pol: our official Ubuntu builds now reside on https://launchpad.net/~semiosis
18:28 glusterbot Title: semiosis in Launchpad (at launchpad.net)
18:28 johnmark you'll find builds there for GlusterFS 3.1, 3.2 and 3.3.x
18:29 _pol So... is this how it will be from now on, or is this just temporary?
18:30 _pol Should I update my Chef recipes to use that semiosis repo for debs?
18:34 stickyboy joined #gluster
18:38 johnmark _pol: semiosis is now our official Ubuntu maintainer. if we offer his builds from gluster.org, it will be the same thing either way
18:39 johnmark _pol: if you ask nicely, you might be able to convince us to mirror his repo on gluster.org
18:42 sshaaf joined #gluster
18:43 _pol It would seem a sensible thing to keep the packages in one place, I would very much appreciate it if the ubuntu debs were hosted on gluster.org.
18:44 Bullardo joined #gluster
18:49 johnmark _pol: ok. we'll work out a way to mirror them
18:54 Nr18 joined #gluster
18:55 _pol thanks johnmark
19:11 nhm joined #gluster
19:26 Teknix joined #gluster
19:36 * semiosis is working on it
19:36 semiosis thx for the feedback _pol
19:37 semiosis some people, when faced with overwhelming demands from their full time job, think I know, I'll contribute to open source in my free time... now they have two full time jobs :)
19:39 johnmark semiosis: doh :)
19:39 semiosis not that i'm overwhelmed exactly, but certainly busy :)
19:40 johnmark semiosis: don't be afraid to ask me for help
19:40 semiosis johnmark: thank you, i will
19:40 johnmark semiosis: last thing I want is a burnt out semiosis
19:41 semiosis that makes two of us
19:42 johnmark :)
19:43 koaps anyone using openstack folsom with glusterfs?
19:43 koaps have a weird issue
19:48 dblack joined #gluster
19:52 nhm semiosis: it's an escape mechanism I think. ;)
19:52 ankit9 joined #gluster
19:53 semiosis found it... https://twitter.com/holdenweb/statuses/195308761936707585
19:53 glusterbot Title: Twitter / holdenweb: RT @tomdale: Some people, wanting ... (at twitter.com)
19:53 kkeithley_wfh semiosis: on a debian wheezy box I started with your glusterfs_3.3.0-ppa1~precise3.debian.tar.gz, untarred it. (now it's in ~/src/debian).
19:54 semiosis kkeithley_wfh: ?
19:54 kkeithley_wfh I cd to ~/src/debian and do a `debuild` and get: ... this directory name does not match the package name according to the
19:54 kkeithley_wfh regex  PACKAGE(-.+)?.
19:54 semiosis btw i will have to go into a meeting in 5 min
19:54 kkeithley_wfh salright
19:54 semiosis github has been pretty slow for me this week
19:54 semiosis kkeithley_wfh: https://github.com/semiosis/glusterfs-debian
19:54 glusterbot Title: semiosis/glusterfs-debian · GitHub (at github.com)
19:55 semiosis the readme has some info on how to get it going
19:55 kkeithley_wfh I'll look at that. Thanks
19:55 semiosis basically that tgz you have is just the debian/ folder, which should be placed in the root of the untarred glusterfs source tree
19:55 semiosis also, above the source tree, you'll need the original tarball (or a symlink to it) named <package>_<version>.orig.tar.gz -- debuild will inform you with an error if that's missing
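A minimal sketch of the layout semiosis describes; the version number and directory names are illustrative.

    # the pristine upstream tarball sits next to the source tree, renamed to
    # <package>_<version>.orig.tar.gz
    cp glusterfs-3.3.0.tar.gz glusterfs_3.3.0.orig.tar.gz
    tar xzf glusterfs-3.3.0.tar.gz
    # the debian/ folder goes into the root of the unpacked source tree
    cp -r glusterfs-debian/debian glusterfs-3.3.0/
    cd glusterfs-3.3.0 && debuild -us -uc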
19:56 semiosis kkeithley_wfh: also, use this: https://launchpad.net/~semiosis/+archive/ubuntu-glusterfs-3.3
19:56 glusterbot Title: ubuntu-glusterfs-3.3 : semiosis (at launchpad.net)
19:56 kkeithley_wfh I've always used slackware, SuSE, and RHEL/Fedora, new to Debian/Ubuntu
19:56 kkeithley_wfh yup
19:56 semiosis i started that one yesterday and it has the latest which conforms to the structure used in the official debina/ubuntu repos
19:57 semiosis my previous ppas had a different structure (s/glusterfs-server/glusterd/)
19:57 kkeithley_wfh yup, I'm working from https://launchpad.net/~semiosis/+archive/glusterfs-3.3
19:57 glusterbot Title: glusterfs-3.3 : semiosis (at launchpad.net)
19:57 semiosis are you working on a debian or an ubuntu build?
19:57 kkeithley_wfh oh, oops
19:57 kkeithley_wfh debian for now
19:58 kkeithley_wfh ubuntu later, after I get UFO into the build on debian
19:58 semiosis kkeithley_wfh: ok just grab the debian/ folder from that github project, probably best place for you to start
19:58 semiosis the ppas need some love :)
19:58 semiosis and i'll be updating them throughout the week
19:58 kkeithley_wfh one bridge at a time
19:58 semiosis ok gtg, bbiab
19:59 kkeithley_wfh still need to get hadoop into the Fedora/RHEL packages too
20:09 blendedbychris joined #gluster
20:09 blendedbychris joined #gluster
20:09 Daxxial_ joined #gluster
20:12 blendedbychris joined #gluster
20:12 blendedbychris joined #gluster
20:14 kaptk2 what are the current best practices for running KVM VM's on Gluster?
20:15 kaptk2 The documentation seems to recommend using the FUSE driver, is that still the best? I seem to remember something about a better implementation mentioned on the mailing list a while back.
20:16 ctria joined #gluster
20:17 blendedbychris joined #gluster
20:17 blendedbychris joined #gluster
20:32 glusterbot New news from newglusterbugs: [Bug 867132] Add STACK_WIND_TAIL for default functions <https://bugzilla.redhat.com/show_bug.cgi?id=867132>
20:55 badone_ JoeJulian: thanks
21:16 plarsen joined #gluster
21:29 wushudoin joined #gluster
21:31 spn joined #gluster
21:32 hattenator joined #gluster
21:33 glusterbot New news from newglusterbugs: [Bug 866557] Some error messages logged should probably be warnings <https://bugzilla.redhat.com/show_bug.cgi?id=866557>
21:46 sashko_ joined #gluster
21:48 m0zes joined #gluster
22:09 Bullardo joined #gluster
22:15 duerF joined #gluster
23:06 daMaestro joined #gluster
23:10 Alpha64 joined #gluster
23:33 tryggvil joined #gluster
23:52 clopez joined #gluster
