
IRC log for #gluster, 2013-07-11


All times shown according to UTC.

Time Nick Message
00:00 tjstansell something similar to 'zfs get output' would be awesome...
00:00 tjstansell i'll have to see if there's a bug already asking for this...
00:00 JoeJulian You should. That's always bugged me a little.
00:06 tjstansell hm... well, i can't find one... so i'll just open a new bug.
00:45 glusterbot New news from newglusterbugs: [Bug 983317] add 'get' option to view all volume options <http://goo.gl/GLmpA>
01:21 klaxa joined #gluster
01:23 bala joined #gluster
01:24 harish joined #gluster
01:35 raghug joined #gluster
01:39 kevein joined #gluster
01:43 badone joined #gluster
01:58 mibby| joined #gluster
02:02 mibby- joined #gluster
02:02 harish joined #gluster
02:06 mibby- Hi, I'm fairly new to gluster and am reading some conflicting recommendations on filesystem format between XFS and EXT4. What's the current recommendation? My use case is a shared fileserver running in AWS using distributed replicated volumes with 2TB (initially at least) of space.
02:11 semiosis xfs with inode size 512
02:19 mibby- thanks semiosis
02:22 semiosis yw
02:23 semiosis fwiw i use glusterfs in ec2 as well, it's been great for me
02:24 mibby- has it been all smooth sailing? any gotcha's?
02:25 semiosis pretty much smooth sailing.  thanks to gluster we've survived a majority of the ec2 "incidents" that we've encountered so far
02:25 semiosis my recommendations are...
02:25 semiosis don't use lvm* just plain ebs disks as bricks (with xfs as mentioned)
02:26 semiosis don't use elastic ips, just make real FQDN hostnames for your servers (gluster1.your.net, etc) and CNAME those to the current public-hostname of your ec2 instances
02:26 semiosis use the CreateImage API call to snapshot entire servers
02:27 semiosis makes backup/restore a snap
02:27 semiosis if you lose a server, follow the second ,,(replace) link...
02:27 glusterbot Useful links for replacing a failed server... if replacement server has different hostname: http://goo.gl/4hWXJ ... or if replacement server has same hostname:
02:27 glusterbot http://goo.gl/rem8L
02:27 semiosis just remap the CNAME for that server to the new instance
02:28 semiosis and another thing, use more smaller ebs vols, so when you need to add capacity you can just expand the ebs vols, instead of adding more bricks
02:28 semiosis adding bricks (the rebalancing needed) is an expensive operation, and with a little planning ahead you can avoid it for longer
02:29 mibby- how many gluster servers and total storage do you have running there?
02:29 semiosis for example, starting out with Nx 256GB ebs vols you can grow your ebs vols by 4x before you have to add more bricks or start striping ebs together with lvm
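
(A sketch of the brick layout being described, with a hypothetical EBS device and mount point; the mkfs options follow the inode-size recommendation above, and xfs_growfs is the step that lets you grow a brick in place after enlarging the EBS volume.)

    mkfs.xfs -i size=512 /dev/xvdf
    mkdir -p /bricks/brick1
    mount /dev/xvdf /bricks/brick1
    # later, once the EBS volume behind /dev/xvdf has been enlarged:
    xfs_growfs /bricks/brick1
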
02:30 semiosis terabytes
02:30 semiosis nothing huge, but big enough to be interesting :)
02:30 mibby- so if I needed a min of 2TB usable to start with, what would you recommend?
02:31 semiosis how much replication can you afford?  3-way is nice because you can use quorum and still have read-write even when one server (i.e. a whole AZ) goes up in smoke
02:32 semiosis i'd say 8 bricks per server, 256GB EBS vol per brick, replicated between 2 or 3 AZs depending on your needs & budget
02:32 mibby- not really a problem at this stage. Availability is more important at this stage
02:32 semiosis that assumes your files are significantly smaller than 256GB
02:32 mibby- yeah much smaller, most will be photos
02:33 mibby- i was initially thinking just 2 servers across both SYD AZ's
02:33 semiosis ebs is a tricky thing, so more bricks provides resilience against loss of a brick or latency spikes
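
(A create command along these lines might look like the following; hostnames and brick paths are hypothetical, and a real 8-bricks-per-server layout would list more brick paths in the same pattern.)

    gluster volume create photos replica 3 \
        gluster1.your.net:/bricks/brick1/data \
        gluster2.your.net:/bricks/brick1/data \
        gluster3.your.net:/bricks/brick1/data
    gluster volume start photos
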
02:33 _pol joined #gluster
02:34 mibby- how difficult is it to add more bricks?
02:34 _pol joined #gluster
02:34 semiosis after you add bricks you need to rebalance the data, which is slow, and if anything goes wrong during you have to do some work to get things back in order
02:34 semiosis so i like to avoid it by design
02:35 semiosis capacity planning
02:35 semiosis try rebalancing with a 1/10th test dataset
02:35 semiosis see how that goes
02:35 semiosis in fact, try everything on a scaled down dataset
02:36 semiosis killing & replacing servers, network partitions, etc
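
(The expansion path being warned about, sketched with the same hypothetical names: adding one more replica set and then rebalancing.)

    gluster volume add-brick photos \
        gluster1.your.net:/bricks/brick2/data \
        gluster2.your.net:/bricks/brick2/data \
        gluster3.your.net:/bricks/brick2/data
    gluster volume rebalance photos start
    gluster volume rebalance photos status
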
02:36 mibby- yeah I've started to build a test environment in there already
02:36 Nuxr0 joined #gluster
02:37 ninkotech__ joined #gluster
02:37 penglish3 joined #gluster
02:37 morse joined #gluster
02:38 mibby- Is it a bad idea to use something like HAProxy to load balance incoming CIFS connections between gluster servers for availability in case one of the servers goes down?
02:38 SteveCooling joined #gluster
02:38 js__ joined #gluster
02:38 ingard_ joined #gluster
02:39 NeonLich1 joined #gluster
02:39 jcastle_ joined #gluster
02:39 semiosis idk much about haproxy and i try to forget what i know about cifs
02:40 semiosis but i can tell you this... the glusterfs native fuse client does HA automagically
02:40 mibby- haha - the main consumer of data will be Windows servers so need to use CIFS
02:40 semiosis so you could put a fuse client with a cifs server (samba?) near your cifs clients
02:40 mibby- so can't use the FUSE client unfortunately
02:40 mibby- Hmm.... didn't think of that
02:40 semiosis well you'll need that cifs server pointed at either a fuse client (recommended) or an nfs client
02:41 semiosis glusterfs doesnt do cifs itself (last time i checked, if it does, thats news to me)
02:41 mibby- no I don't believe it does either - would need to install samba, etc
02:42 yosafbridge` joined #gluster
02:43 semiosis another option may be mounting directly from a windows nfs client to the gluster nfs server
02:43 semiosis no idea how viable that is, but may be worth a try
02:43 semiosis i think windows has an nfs client
02:43 mibby- yeah i believe it does
02:44 semiosis nfs connections to glusterfs arent HA though
02:44 georgeh|workstat joined #gluster
02:44 semiosis but you could try both, direct nfs, and samba via fuse, see which you like better
02:46 mibby- I'm trying to make the architecture as stateless as possible, so that if anything breaks or stops the site doesn't go down. Pity there's not a Windows gluster client :(
02:48 semiosis a linux vm in windows running samba & fuse client
02:48 semiosis :D
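
(A rough sketch of the samba-over-fuse idea: mount the volume with the FUSE client, then export that mount with Samba. Share name, hostname and paths are hypothetical.)

    mount -t glusterfs gluster1.your.net:/photos /mnt/photos

    # /etc/samba/smb.conf fragment exporting the FUSE mount over CIFS
    [photos]
        path = /mnt/photos
        read only = no
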
02:49 aliguori_ joined #gluster
02:50 rkeene joined #gluster
02:51 ramkrsna joined #gluster
02:52 mibby- is your configuration distributed replicated volumes?
02:53 semiosis yes
02:54 semiosis that's by far the most common type of volume
02:54 semiosis people rarely go without replication
02:59 lalatenduM joined #gluster
03:09 sonne joined #gluster
03:09 bivak joined #gluster
03:18 mibby- thanks for your help semiosis
03:18 semiosis yw, good luck
03:19 mibby- thanks
03:29 raghug joined #gluster
03:43 chirino joined #gluster
03:55 fidevo joined #gluster
04:13 itisravi joined #gluster
04:20 anand joined #gluster
04:20 shylesh joined #gluster
04:27 mohankumar joined #gluster
04:32 bulde joined #gluster
04:38 hagarth joined #gluster
04:39 mibby- joined #gluster
04:41 mibby- joined #gluster
04:51 T0aD joined #gluster
04:55 vpshastry joined #gluster
04:56 raghu joined #gluster
04:57 sgowda joined #gluster
05:04 raghug joined #gluster
05:05 deepakcs joined #gluster
05:22 puebele1 joined #gluster
05:22 psharma joined #gluster
05:23 puebele3 joined #gluster
05:24 bala joined #gluster
05:37 CheRi joined #gluster
05:37 mjrosenb if I have two files, a and b, and they're hardlinked, it doesn't matter which one is 'the original' and which one was created as a link, right?
05:38 JoeJulian right
05:38 JoeJulian They're all names that point to the same inode
05:38 mjrosenb JoeJulian: ah, I had a question yesterday that you can likely answer.
05:39 mjrosenb JoeJulian: what is in my .gluster directory if I don't have replication turned on?
05:39 mjrosenb also, what bad things will happen if the hard links cannot be created?
05:39 JoeJulian When you create hardlinks, the link count is incremented for that inode entry. When you delete that name entry, the inode link count is decremented. That inode is still in use until the link count reaches 0.
05:40 mjrosenb ugh. this is going to be very un-fun
05:40 JoeJulian good question... I haven't even looked to see if it's created without replication.
05:41 mjrosenb it certainly seems to be for me
05:41 JoeJulian That directory structure allows hardlinks from the clients, allows offline deletes...
05:42 JoeJulian offline renames...
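
(A quick way to see the link-count behaviour JoeJulian describes; the file names are arbitrary.)

    touch a
    ln a b                 # both names now point to the same inode
    stat -c '%i %h' a b    # same inode number, link count 2 for each name
    rm a
    stat -c '%i %h' b      # the inode survives; its link count drops back to 1
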
05:42 samppah hmmh.. if i'm taking backups from backend filesystems, should i backup everything from .glusterfs directory aswell?
05:42 JoeJulian As long as your backup system accounts for hardlinks, it wouldn't add any expense to your backup.
05:43 JoeJulian s/any/any significant/
05:43 glusterbot What JoeJulian meant to say was: As long as your backup system accounts for hardlinks, it wouldn't add any significant expense to your backup.
05:45 mjrosenb JoeJulian: so i'm trying to migrate the backing store of one of my bricks using rsync
05:45 JoeJulian -H
05:46 mjrosenb but i also wanted to use zfs's subvolume feature to make all of the top level directories their own volumes to make management easier
05:46 mjrosenb but this has failed *miserably* because it copied a bunch of stuff to .gluster
05:46 JoeJulian Ewww.... I see where you're going...
05:46 mjrosenb then fell over since it couldn't create hardlinks to the actual file locations
05:48 JoeJulian Just delete the .glusterfs directories (there should be one in each of these new brick roots) and heal...full each new volume. You should be okay.
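
(JoeJulian's suggestions put together as a sketch; brick paths and the volume name are hypothetical.)

    rsync -aH /old/brick/ /new/brick/   # -H preserves hardlinks where the target layout allows it
    rm -rf /new/brick/.glusterfs        # drop the gfid hardlink tree if it could not be copied intact
    gluster volume heal myvol full      # let self-heal rebuild .glusterfs on the new brick
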
05:48 san joined #gluster
05:49 Guest45674 any one can help with error : [2013-07-11 05:48:52.133642] W [rdma.c:1079:gf_rdma_cm_event_handler] 0-gbits-client-3: cma event RDMA_CM_EVENT_ADDR_ERROR, error -19 (me: peer:)
05:49 mjrosenb delete .glusterfs before I rsync?
05:49 JoeJulian mjrosenb: sure
05:49 JoeJulian Guest45674: That claims it's not an error...
05:50 JoeJulian Does something not work?
05:50 Guest45674 I just created rdma based distributed replicated volume on 3.4beta4
05:50 vpshastry joined #gluster
05:50 Guest45674 I am trying to mount using fstab but it is stuck
05:50 Guest45674 the log file has this W
05:51 JoeJulian The " W ", which is a warning,  suggests that it's a recoverable issue.
05:51 Guest45674 I agree but how to mount the gluster volume using gluste native client and rdma
05:53 JoeJulian mjrosenb: You're more IB knowledgeable. What IB specific info should he include in his bug report?
05:53 mjrosenb IB?
05:53 JoeJulian Maybe not... don't you use infiniband?
05:54 JoeJulian I've probably just lost my mind again...
05:54 mjrosenb JoeJulian: nope, no infiniband here.
05:54 mjrosenb good ol' sas + backplane.
05:54 JoeJulian It's all this crappy php code I've been ripping apart for a week. It's melting my brain.
05:56 satheesh joined #gluster
05:58 sank I think there is no way of mounting volume other than using IP address or host name.
06:00 JoeJulian Address resolution (rdma_resolve_addr) failed
06:01 sank where can I find detail errors ?
06:01 JoeJulian dmesg?
06:02 JoeJulian Sorry, I don't have IB hardware to play with.
06:03 sank last there lines of dmesg :
06:03 sank [48336.034612] init: statd main process (6084) killed by KILL signal [48336.034646] init: statd main process ended, respawning [48847.541211] systemd-hostnamed[8050]: Warning: nss-myhostname is not installed. Changing the local hostname might make it unresolveable. Please install nss-myhostname! a
06:05 JoeJulian Are your ethernet and infiniband on the same ip subnet?
06:06 sank My ethernet is 192.168.2.0 network and using rdma and not IPoIB so no subnet for infiniband
06:07 sank I used ethernet IPs to create rdma volume and passed argument transport rdma while creating volume.
06:07 JoeJulian ok
06:08 sank while mounting I am using RRDNS (ethernet IPs) - how do I supply transport rdma while mounting the volume
06:08 sank in /etc/fstab
06:09 sank mount command : sudo mount -t glusterfs ca-gfs.testing.int:/gbits -o transport=rdma /mnt/gbits
06:09 JoeJulian mycluster:myvolume.rdma /mnt/foo glusterfs ... etc
06:09 JoeJulian Should be able to  mount -t glusterfs ca-gfs.testing.int:/gbits.rdma /mnt/gbits
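
(The same mount expressed as an /etc/fstab entry, reusing the hostname and paths from this conversation; the mount options are just common defaults, not something specific to this setup.)

    ca-gfs.testing.int:/gbits.rdma  /mnt/gbits  glusterfs  defaults,_netdev  0 0
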
06:11 sank doesnt work -same error
06:13 JoeJulian That error's filtering up from the ibverbs library. I'm right now trying to figure out how the connection manager abstraction layer works.
06:13 JoeJulian ... since that's where the error is.
06:16 sank ok, thanks, I am using mellanox http://www.mellanox.com/downloads/ofed/MLNX_OFED-2.0/MLNX_OFED_LINUX-2.0-2.0.5-ubuntu12.04-x86_64.tgz
06:16 glusterbot <http://goo.gl/0ishJ> (at www.mellanox.com)
06:16 bala joined #gluster
06:18 sank JoeJulian : please let me know if you stop investigating.
06:18 rastar joined #gluster
06:20 JoeJulian I think you must configure IPoIB on those interfaces in order for them to exchange the memory region key. I think without IP on those interfaces - this from some very cursory skimming of this article: http://thegeekinthecorner.wordpress.com/2010/09/28/rdma-read-and-write-with-ib-verbs/
06:20 glusterbot <http://goo.gl/9MtF2> (at thegeekinthecorner.wordpress.com)
06:20 JoeJulian s/ I think without IP on those interfaces//
06:20 glusterbot What JoeJulian meant to say was: I think you must configure IPoIB on those interfaces in order for them to exchange the memory region key. - this from some very cursory skimming of this article: http://goo.gl/9MtF2
06:21 JoeJulian I could be wrong though, but that's something simple to try.
06:22 sank sure, I will try and update. If I do IPoIB, I can completely skip ethernet IPs 1. creating volume 2. mounting volumes.
06:22 jtux joined #gluster
06:25 sank Does it mean gluster lacks support for rdma ?
06:27 JoeJulian sank: No, but according to what I was getting from that article, there's out-of-band configuration data that has to be passed before the rdma connection is established.
06:28 bulde sank: currently it means glusterfs lacks bug fixes in rdma
06:28 ngoswami joined #gluster
06:28 JoeJulian ^^ or what he said...
06:29 JoeJulian bulde: So it is possible for that oob memory region key transfer to happen over a separate ethernet connection if the bugs were fixed?
06:29 bulde sank: soon it would be fixed... one of the main reason for not able to work on that is the developer who worked on that particular transport lacked some hardware support with RDMA in it :-/
06:29 JoeJulian That would make it a bit more difficult.
06:30 JoeJulian Do you have the bug reference for him to follow?
06:30 bulde JoeJulian: should be possible. I think that current RDMA-CM approach requires IPoIB to be setup even to have RDMA working
06:31 sank bulde : what does it mean ? Do I create the volume using IPoIB and then switch back to rdma ?
06:31 sank Confused now.
06:31 bulde http://review.gluster.org/#/c/149/21/doc/rdma-cm-in-3.4.0.txt
06:31 glusterbot <http://goo.gl/lWC1Z> (at review.gluster.org)
06:33 fidevo joined #gluster
06:33 JoeJulian sank: If I'm reading it correctly, it'll do the handshaking via IPoIB, then switch to RDMA once the information has been exchanged.
06:34 bulde JoeJulian: yep, thats right
06:34 sank so I configured infiniband hardware for IPoIB, I use these IPs to create gluster volume and mount them but I specify transport type rdma.
06:34 sank please correct me if wrong
06:35 JoeJulian That's how I would do it.
06:36 sank can IPoIB and RDMA work together ? going crazy now.
06:37 JoeJulian I'm going to go slightly out on a limb here and say definitively, yes.
06:38 sank good luck to me. Trying now that.
06:41 itisravi joined #gluster
06:43 JoeJulian god I hate pdfs.... makes it a pain in the ass to link to anything useful... http://goo.gl/VjUFZ is a pdf from a 2006 talk at Red Hat Summit. Slide 3 has a useful hierarchy.
06:47 ricky-ticky joined #gluster
06:48 CheRi joined #gluster
06:48 guigui3 joined #gluster
06:49 hagarth JoeJulian: for a moment, i thought pdfs referred to a filesystem ;)
06:49 JoeJulian lol
06:49 ekuric joined #gluster
06:49 sank JoeJulian: thanks
06:50 JoeJulian hagarth: If it were a proprietary read-only filesystem that put everything into one long string, I would hate that one too.
06:54 hagarth JoeJulian: lol
07:07 jtux joined #gluster
07:15 hybrid5121 joined #gluster
07:16 glusterbot New news from newglusterbugs: [Bug 808073] numerous entries of "OPEN (null) (--) ==> -1 (No such file or directory)" in brick logs when an add-brick operation is performed <http://goo.gl/zQN2F> || [Bug 786007] [c3aa99d907591f72b6302287b9b8899514fb52f1]: server crashed when dict_set for truster.afr.vol-client when compiled with efence <http://goo.gl/i10pz>
07:16 glusterbot New news from resolvedglusterbugs: [Bug 918052] Failed getxattr calls are throwing E level error in logs. <http://goo.gl/7yXTH>
07:25 ThatGraemeGuy joined #gluster
07:27 hybrid512 joined #gluster
07:27 andreask joined #gluster
07:31 dobber joined #gluster
07:37 vshankar joined #gluster
07:38 satheesh joined #gluster
07:43 mgebbe_ joined #gluster
07:46 glusterbot New news from newglusterbugs: [Bug 781285] [faf9099bb50d4d2c1a9fe8d3232d541b3f68bc58] improve replace-brick cli outputs. <http://goo.gl/6mwh7> || [Bug 848556] glusterfsd apparently unaware of brick failure. <http://goo.gl/rIjjW> || [Bug 858732] glusterd does not start anymore on one node <http://goo.gl/X7NsZ> || [Bug 865327] glusterd keeps listening on ipv6 interfaces for volumes when using inet familly address <http://g
07:47 mkollaro joined #gluster
07:50 ctria joined #gluster
08:07 CheRi joined #gluster
08:07 vrturbo joined #gluster
08:12 X3NQ joined #gluster
08:17 mooperd joined #gluster
08:17 vincent_vdk joined #gluster
08:26 itisravi joined #gluster
08:29 itisravi_ joined #gluster
08:35 raghug joined #gluster
08:46 anand joined #gluster
08:58 clag_ joined #gluster
09:03 itisravi joined #gluster
09:12 vimal joined #gluster
09:17 rastar joined #gluster
09:18 vpshastry1 joined #gluster
09:34 bharata joined #gluster
09:35 jimlin_ joined #gluster
09:39 pkoro joined #gluster
09:39 puebele joined #gluster
09:45 hagarth joined #gluster
09:45 chirino joined #gluster
09:46 CheRi joined #gluster
09:56 mgebbe_ joined #gluster
10:00 puebele1 joined #gluster
10:03 ngoswami joined #gluster
10:05 rastar joined #gluster
10:05 lala_ joined #gluster
10:06 vpshastry1 joined #gluster
10:11 jclift_ joined #gluster
10:12 ccha does option performance.cache-size exist for version 3.3 ? because I don't see it in "gluster volume set help"
10:17 pkoro joined #gluster
10:22 chirino joined #gluster
10:32 lala_ joined #gluster
10:32 hagarth joined #gluster
10:35 kkeithley1 joined #gluster
10:43 rcheleguini joined #gluster
10:50 lalatenduM joined #gluster
11:13 andreask joined #gluster
11:20 CheRi joined #gluster
11:38 X3NQ has anyone written an Gluster app for Observium?
11:41 spider_fingers joined #gluster
11:56 edward1 joined #gluster
12:11 StarBeast joined #gluster
12:17 kevein joined #gluster
12:18 odesport joined #gluster
12:19 odesport Hi everybody
12:19 odesport Is it possible to use glusterfs only as a shared FS, without replication ?
12:20 andreask sure
12:21 T0aD thats what I was using it for back in 2007 so definitely yes
12:21 T0aD (better performances than samba for a LAN on linux :P)
12:22 bulde joined #gluster
12:23 hagarth joined #gluster
12:23 odesport Thanks.  and what is the command to create volume without replica (gluster volume create) ?
12:32 ccha all options are listed by "gluster volume set help" ?
12:32 rwheeler joined #gluster
12:33 ccha if you don't see an option name, does that mean the option doesn't exist? ie performance.cache-size for gluster version 3.3.1
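
(One way to answer this from the shell; the volume name and value below are only illustrative.)

    gluster volume set help | grep -i cache-size
    gluster volume set myvol performance.cache-size 256MB
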
12:46 odesport I'd like help writing a "gluster volume create" command with only one storage server
12:46 anand joined #gluster
12:54 robos joined #gluster
12:57 jclift_ odesport: Just do the volume create command, but leave out the "replica x" option, going straight on to giving the server:/brick/path name
12:58 odesport ok
12:59 jclift_ odesport: It'll be something like this: gluster volume create test-volume myserver:/some/path
13:00 odesport thanks
13:01 mohankumar joined #gluster
13:03 deepakcs joined #gluster
13:06 matiz joined #gluster
13:31 jthorne joined #gluster
13:31 bulde joined #gluster
13:42 bdperkin joined #gluster
13:42 kedmison joined #gluster
13:44 mooperd joined #gluster
13:47 hagarth joined #gluster
13:48 kaptk2 joined #gluster
13:57 jebba joined #gluster
14:07 mooperd joined #gluster
14:13 ThatGraemeGuy joined #gluster
14:17 sgowda joined #gluster
14:21 hagarth joined #gluster
14:24 failshell joined #gluster
14:26 recidive joined #gluster
14:29 aliguori joined #gluster
14:31 guigui3 joined #gluster
14:34 bugs_ joined #gluster
14:39 jbrooks joined #gluster
14:46 anand joined #gluster
14:51 lalatenduM joined #gluster
14:51 bala joined #gluster
14:52 puebele joined #gluster
14:54 _pol joined #gluster
14:56 Technicool joined #gluster
15:09 kedmison joined #gluster
15:16 ekuric1 joined #gluster
15:21 guigui3 left #gluster
15:22 spider_fingers left #gluster
15:23 ekuric1 left #gluster
15:27 clag_ left #gluster
15:31 ctria joined #gluster
15:35 vpshastry joined #gluster
15:40 tomsve joined #gluster
15:54 glusterbot New news from resolvedglusterbugs: [Bug 832622] samba fails to delete files when "delete_on_close" is set due to inode mismatch <http://goo.gl/QCJ4u>
16:02 vpshastry joined #gluster
16:03 kedmison joined #gluster
16:11 zaitcev joined #gluster
16:21 bulde joined #gluster
16:22 al joined #gluster
16:26 raghug joined #gluster
16:27 sprachgenerator joined #gluster
16:50 joelwallis joined #gluster
16:53 harryxiyou joined #gluster
16:53 harryxiyou Hi
16:53 glusterbot harryxiyou: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
16:54 glusterbot New news from newglusterbugs: [Bug 960867] failover doesn't work when a hdd part of hardware raid massive becomes broken <http://goo.gl/6usIi> || [Bug 962878] A downed node does not show up at all in volume status output <http://goo.gl/UXegp>
16:54 harryxiyou How to deploy gluster on one node with pseudo mode?
16:56 plarsen joined #gluster
16:58 semiosis harryxiyou: the only catch is that gluster won't let you use 'localhost' or '127.0.0.1' in a brick address, so you should use the eth0 ip address
16:58 semiosis or if you are only going to connect from localhost (like if you're developing something) you could make a loopback alias
17:00 ctria joined #gluster
17:02 robos joined #gluster
17:04 harryxiyou semiosis: Thanks very much but would you please give me the installation docs for one node?
17:05 kedmison joined #gluster
17:05 semiosis harryxiyou: install the ,,(latest) glusterfs package
17:05 glusterbot harryxiyou: The latest version is available at http://goo.gl/zO0Fa . There is a .repo file for yum or see @ppa for ubuntu.
17:05 semiosis ,,(ppa) if you're on ubuntu
17:05 glusterbot The official glusterfs 3.3 packages for Ubuntu are available here: 3.3 stable: http://goo.gl/7ZTNY -- 3.3 QA: http://goo.gl/5fnXN -- and 3.4 QA: http://goo.gl/u33hy
17:06 theron joined #gluster
17:08 harryxiyou I use Debian Wheezy
17:10 daMaestro joined #gluster
17:10 semiosis then maybe you want to use the 3.4.0 beta package from debian experimental
17:10 harryxiyou The latest version supports stand alone mode?
17:11 semiosis why do you want stand alone mode?  glusterfs is a distributed filesystem... if you only want one server, use nfs
17:11 raghug joined #gluster
17:11 harryxiyou I just wanna build a stand alone gluster pool.
17:11 semiosis i dont understand
17:12 harryxiyou semiosis: Actually, I am developing Gluster Ganeti Support.
17:12 harryxiyou I just have one node in hand.
17:12 harryxiyou I have no more node here.
17:12 semiosis ok that makes sense
17:13 semiosis yes you can use glusterfs with a single server
17:13 harryxiyou But I dont know how to setup gluster with a single server.
17:13 semiosis that has been possible for a long time, though it's not recommended for regular production use
17:13 harryxiyou Would you please give me the doc?
17:13 semiosis ,,(rtfm)
17:13 glusterbot Read the fairly-adequate manual at http://goo.gl/E3Jis
17:13 semiosis the docs are there
17:14 harryxiyou I cannot find doc for a single server.
17:14 semiosis there is no doc because that's not recommended for regular users
17:14 bsaggy joined #gluster
17:14 semiosis all you need to do is install glusterfs, then create a volume with one brick, then start the volume
17:15 semiosis the readme in my java project may be helpful to you, although it has some stuff you won't need: https://github.com/semiosis/libgfapi-jni/blob/master/readme.md
17:15 glusterbot <http://goo.gl/U4Hqa> (at github.com)
17:16 harryxiyou Ah, you mean I just need install gluster on one node and run commands on this node, right?
17:16 semiosis yes
17:16 harryxiyou Okay, let me have a try, thanks very much ;-)
17:16 semiosis remember what i said earlier about the address used in a brick
17:16 semiosis you're welcome, good luck
17:17 semiosis and thanks for working on the integration with ganeti :)
17:17 harryxiyou semiosis: Yeah, I love storage system and Ganeti ;-)
17:17 harryxiyou semiosis: Would you please tell me the notes about address again?
17:18 semiosis harryxiyou: the only catch is that gluster won't let you use 'localhost' or '127.0.0.1' in a brick address, so you should use the eth0 ip address
17:18 hagarth joined #gluster
17:18 semiosis or if you are only going to connect from localhost (like if you're developing something) you could make a loopback alias
17:18 semiosis which is what i do in the readme file i linked: https://github.com/semiosis/libgfapi-jni/blob/master/readme.md
17:18 glusterbot <http://goo.gl/U4Hqa> (at github.com)
17:19 harryxiyou semiosis: Yeah, let me have a try and I would ask for help if I happen to any questions, thanks ;-)
17:19 semiosis yw
17:20 harryxiyou ;-)
17:21 raghug joined #gluster
17:24 glusterbot New news from newglusterbugs: [Bug 983676] 2.6.39-400.109.1.el6uek.x86_64 doesn't work with GlusterFS 3.3.1 <http://goo.gl/ZBvUF>
17:27 harryxiyou semiosis: I just need do as follows, right?
17:27 harryxiyou To really test this you'll need a running glusterfs volume called foo. Set up a loopback interface alias, because glusterfs refuses to create a volume with a brick on localhost or 127.0.0.1 (ifconfig lo:0 127.0.2.1), then create and start a glusterfs volume called foo (gluster volume create foo 127.0.2.1:/var/tmp/foo ; gluster volume start foo)
17:28 semiosis if you do that you will only be able to connect to the volume from localhost
17:28 semiosis if you want to connect to your stand-alone server from a different client machine on the network, then use the eth0 IP or ,,(hostnames) mapped to it
17:28 harryxiyou Yeah, localhost now is 127.0.2.1
17:28 glusterbot Hostnames can be used instead of IPs for server (peer) addresses. To update an existing peer's address from IP to hostname, just probe it by name from any other peer. When creating a new pool, probe all other servers by name from the first, then probe the first by name from just one of the others.
17:30 harryxiyou You didnt write these in your readme.md, right?
17:30 harryxiyou Yeah, I see.
17:30 harryxiyou I just need do as follows.
17:31 harryxiyou ifconfig lo:0 eth0_ip ; gluster volume create foo eth0_ip:/var/tmp/foo ; gluster volume start foo
17:32 harryxiyou Am I right?
17:34 semiosis no
17:34 semiosis if you're goign to use the eth0 ip then you dont need to do ifconfig at all
17:35 semiosis just gluster volume create foo <eth0>:/var/tmp/foo
17:35 semiosis gluster volume start foo
17:36 harryxiyou I should not do "ifconfig lo:0 127.0.2.1", right?
17:36 vpshastry left #gluster
17:37 harryxiyou Yeah, I see ;-)
17:38 harryxiyou semiosis: What's the differences between volume and pool in glusterFS?
17:38 chirino joined #gluster
17:39 semiosis harryxiyou: a pool is a collection of servers that are managed as a group.
17:40 semiosis harryxiyou: a volume is a collection of bricks hosted by servers (which are all in the same pool)
17:40 semiosis ,,(glossary)
17:40 glusterbot A "server" hosts "bricks" (ie. server1:/foo) which belong to a "volume"  which is accessed from a "client"  . The "master" geosynchronizes a "volume" to a "slave" (ie. remote1:/data/foo).
17:40 semiosis a client can mount a volume from any server in the pool, because the servers all share all of the configuration information for all of the volumes in the pool
17:40 dewey joined #gluster
17:41 semiosis once a client retrieves the volume information from the ,,(mount server) it makes connections directly to the bricks
17:41 glusterbot The server specified is only used to retrieve the client volume definition. Once connected, the client connects to all the servers in the volume. See also @rrnds
17:41 semiosis also ,,(processes)
17:41 glusterbot The GlusterFS core uses three process names: glusterd (management daemon, one per server); glusterfsd (brick export daemon, one per brick); glusterfs (FUSE client, one per client mount point; also NFS daemon, one per server). There are also two auxiliary processes: gsyncd (for geo-replication) and glustershd (for automatic self-heal). See http://goo.gl/F6jqx for more information.
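
(A quick check of which of those processes are running on a server; the volume name is hypothetical.)

    ps -C glusterd,glusterfsd,glusterfs -o pid,args
    gluster volume status myvol
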
17:43 harryxiyou Thanks very much.
17:45 harryxiyou There is no latest (3.4 or newer) gluster package for Debian Wheezy?
17:49 harryxiyou semiosis: Is there latest gluster package for Debian wheezy?
17:49 semiosis it's in unstable: http://packages.debian.org/search?keywords=gluster&searchon=names&suite=unstable&section=all
17:49 glusterbot <http://goo.gl/gU2bq> (at packages.debian.org)
17:51 harryxiyou If I can use "glusterfs_3.3.0-1_amd64.deb" for testing?
17:51 semiosis you may be able to install my ,,(ppa) packages built for ubuntu precise as well
17:51 glusterbot The official glusterfs 3.3 packages for Ubuntu are available here: 3.3 stable: http://goo.gl/7ZTNY -- 3.3 QA: http://goo.gl/5fnXN -- and 3.4 QA: http://goo.gl/u33hy
17:51 semiosis 3.3.0 is too old
17:51 kedmison joined #gluster
17:51 semiosis try 3.4.0
17:52 harryxiyou Yeah, you mean here https://launchpad.net/~semiosis/+archive/ubuntu-glusterfs-3.4
17:52 glusterbot <http://goo.gl/u33hy> (at launchpad.net)
17:52 semiosis yes, the precise package may work on wheezy
17:52 semiosis try it
17:53 harryxiyou I cannot add the source list. I should download the debian package directly and install it.
17:54 harryxiyou But I cannot find the link to download this package.
17:55 semiosis click "View Package Details" to get to this page: https://launchpad.net/~semiosis/+arc​hive/ubuntu-glusterfs-3.4/+packages
17:55 glusterbot <http://goo.gl/eu9Sy> (at launchpad.net)
17:55 semiosis then expand the precise build, you can download the debs then
17:55 bsaggy joined #gluster
17:57 mkollaro so I'm trying gluster for the first time, but I'm getting this error "Probe unsuccessful. Probe returned with unknown errno 107" when I'm doing the first peer probe
17:58 mkollaro I have 2 Fedora 18 machines in virt-managed cloned from each other, pingable, without selinux, port 24007 accessible trough telnet
17:58 mkollaro so I ran out of ideas about what is wrong
18:02 harryxiyou semiosis: I should download all the packages as follows
18:02 harryxiyou glusterfs-client_3.4.0beta4-beta4.0~precise2_amd64.deb (16.1 KiB) glusterfs-client_3.4.0beta4-beta4.0~precise2_i386.deb (16.0 KiB) glusterfs-common_3.4.0beta4-beta4.0~precise2_amd64.deb (1.4 MiB) glusterfs-common_3.4.0beta4-beta4.0~precise2_i386.deb (1.4 MiB) glusterfs-dbg_3.4.0beta4-beta4.0~precise2_amd64.deb (5.1 MiB) glusterfs-dbg_3.4.0beta4-beta4.0~precise2_i386.deb (4.7 MiB) glusterfs-server_3.4.0beta4-beta4.0~precise2_amd64
18:02 harryxiyou right?
18:02 phox I don't see why you'd need both architectures
18:04 harryxiyou Ah, I see, thanks for this silly question ;-)
18:04 harryxiyou phox: thanks for reminders ;-)
18:05 harryxiyou Are these needed?   glusterfs_3.4.0beta4-beta4.0~precise2.debian.tar.gz (10.3 KiB) glusterfs_3.4.0beta4-beta4.0~precise2.dsc (1.8 KiB) glusterfs_3.4.0beta4.orig.tar.gz (3.6 MiB)
18:07 harryxiyou semiosis: Are the above three packages needed?
18:07 phox you need common and server to run a server, and common and client to be a client.
18:08 phox no harm in installing client on everything, either; it's useful to be able to mount locally...
18:09 harryxiyou phox: Okay, that is to say, I just need common, server and client.
18:11 _pol joined #gluster
18:16 harryxiyou I have installed gluster from the debian software repository as follows.
18:16 harryxiyou root@node1:~/gluster# aptitude install glusterfs-client glusterfs-dbg glusterfs-server glusterfs-common  The following NEW packages will be installed:   fuse-utils{a} glusterfs-client glusterfs-common glusterfs-dbg glusterfs-server    libibverbs1{a}  0 packages upgraded, 6 newly installed, 0 to remove and 0 not upgraded. Need to get 17.7 MB of archives. After unpacking 59.1 MB will be used. Do you want to continue? [Y/n/?] Y Get:
18:17 harryxiyou [...]
18:17 harryxiyou Processing triggers for man-db ... Setting up fuse-utils (2.9.0-2+deb7u1) ... Setting up libibverbs1 (1.1.6-1) ... Setting up glusterfs-common (3.2.7-3) ... Setting up glusterfs-client (3.2.7-3) ... Setting up glusterfs-server (3.2.7-3) ... [ ok ] Starting glusterd service: glusterd. Setting up glusterfs-dbg (3.2.7-3) ...
18:18 harryxiyou And I can also start the volume successfully as follows.
18:18 harryxiyou root@node1:~/gluster# gluster volume create foo node1:/var/tmp/foo Creation of volume foo has been successful. Please start the volume to access data. root@node1:~/gluster# gluster volume start foo Starting volume foo has been successful
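
(The usual next step after this, assuming node1 resolves to the eth0 address as discussed earlier.)

    mkdir -p /mnt/foo
    mount -t glusterfs node1:/foo /mnt/foo
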
18:19 harryxiyou This is just the gluster version of (3.2.7-3)
18:19 harryxiyou Not the latest 3.4 version.
18:19 harryxiyou Does that version difference matter for implementing GlusterFS Ganeti support?
18:20 harryxiyou semiosis: Would you please give me some suggestions?
18:35 paw joined #gluster
18:40 paw Question about mirrors: I have a replicated volume set up with bricks from servers A and B. Then I add-brick to this volume with 2 more bricks, both from server A. When I mount this volume and try to populate it, I'm not seeing the data being distributed. Could the fact that 3 of the bricks are from the same node be the reason?
18:40 Keawman joined #gluster
18:47 Technicool paw, that shouldn't matter....how many files have you created to test?
18:47 paw 26 files created in this scenario
18:48 paw all of the files ended up in 1 brick
18:48 Technicool thats odd
18:48 Technicool its possible, but highly improbable ;)
18:48 paw so, maybe just try creating more files?
18:49 paw when you have a 2x2 mirrored configuration, what determines how the files are distributed?
18:49 Technicool 26 should have been enough but its odd they all landed on the same brick....try creating about 1000 empty files
18:49 paw k
18:49 Technicool filename always determines where it lands
18:50 Technicool in 2x2, the hash will send files to one of the two pairs, where it will then be replicated
18:50 Technicool so creating 1000 files, about 50% should land on the first pair, 50% on the next
18:50 paw in this case, filename's like a{1,2,3,4,5,6,7,8} b{1,2,3,4,5,6,7,8} c{1,2,3,4,5,6,7,8} and file1 file 2
18:51 Technicool which is why it sounds odd that they all landed on the same brick....do you mean brick, or just the initial pair (before you added the two new bricks)
18:51 paw in this case, everything ended up written to brick 1 from the initial pair
18:54 kkeithley_ with short, similar file names it's not highly improbable that they all landed on the same brick. Use longer names to increase the odds of getting better distribution
18:55 paw I'll try that.
18:55 paw thanks!
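
(A distribution test along the lines suggested above, with hypothetical mount and brick paths.)

    for i in $(seq 1 1000); do touch /mnt/vol/test-file-number-$i; done
    for b in /bricks/brick*; do echo "$b: $(ls $b | wc -l)"; done
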
18:58 Keawman is there a way to completely clear a geo-replication config if the slave is no longer valid?
18:58 Technicool paw, im seeing the same thing as you are, what version of gluster are you using?
18:59 Technicool i know it works if all bricks are on the same server so not sure what would cause it not to work here
19:01 neofob joined #gluster
19:02 paw 3.4.0beta2
19:03 Technicool beta4 here...
19:04 Technicool i should note that the difference is the files are distributed as expected between the initial two bricks
19:04 Keawman not sure if it helps but i have a script that creates two files each hour test and test3 and those both go to separate bricks each time
19:05 paw in the case where all bricks are on the same server, were they all created close together?
19:05 Technicool paw, no in that case they are distributed roughly evenly as expected
19:06 Technicool typically i have two pairs (2x2)
19:07 Keawman anyone have any ideas about how to clear out a geo-replication config
19:08 Technicool Keawman, it doesn't clear with a stop?
19:09 Keawman Technicool,it's been quite a while since i worked  with it and I can't find any file that will give me the names i used to set it up ...do you know where i could find that
19:10 Technicool Keawman, should be able to find it with a simple `gluster volume geo-replication status`
19:10 Technicool no need to add hostname or volume to run that command
19:11 Keawman yeah tried it's all empty
19:14 Technicool not sure i understand what it is you want to clear then?
19:16 Keawman Technicool, if i run gluster volume info  it shows geo-replication.indexing: on , but if i try to turn it off it says i can't because  geo-replication sessions exist
19:16 Technicool i see...you run all the geo-rep commands from the same host or multiple?
19:17 JoeJulian Keawman: What's the exact error message? I want to look through the source and see where that comes from.
19:17 Keawman not sure likely multiple
19:17 gluslog_ joined #gluster
19:17 Keawman volume set: failed: geo-replication.indexing cannot be disabled while geo-replication sessions exist
19:19 jebba1 joined #gluster
19:20 lanning_ joined #gluster
19:20 social_ joined #gluster
19:22 haakon_ joined #gluster
19:22 JoeJulian So that means that "volinfo->gsync_slaves->count" is > 0. Check all your servers to see if any of them are geo-rep masters. If they're not, restart all glusterd and see if that clears it.
19:24 twx joined #gluster
19:24 Keawman you mean  glustervolume geo-replication status ...and check if there is anything listed under MASTER?
19:26 wgao__ joined #gluster
19:27 JoeJulian Any of your servers /could/ be a master, if you did the "volume geo-replication blah blah start" on them. That would make them a master (see @glossary). So if any of them reference the volume you're trying to remove the index from in "volume geo-replication status", then you won't be able to remove the index until that one's stopped.
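
(The checks JoeJulian describes, to be run on each server; the master volume and slave below are placeholders.)

    gluster volume geo-replication status
    gluster volume geo-replication <mastervol> <slave> stop
    gluster volume set <mastervol> geo-replication.indexing off
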
19:32 plarsen joined #gluster
19:33 mjrosenb joined #gluster
19:33 aknapp joined #gluster
19:36 lyang0 joined #gluster
19:39 mjrosenb joined #gluster
19:41 _pol joined #gluster
19:41 portante joined #gluster
19:41 cyberbootje joined #gluster
19:43 Keawman JoeJulian,well i got one server cleared with your help and finally found the names in the /var/lib/glusterd/vols/$volumename/info file
19:43 eryc joined #gluster
19:43 eryc joined #gluster
19:43 JoeJulian Excellent.
19:44 StarBeas_ joined #gluster
19:46 Cenbe_ joined #gluster
19:46 foster_ joined #gluster
19:47 snarkyboojum_ joined #gluster
19:47 edong23_ joined #gluster
19:48 jones_d_ joined #gluster
19:48 _Dave2_ joined #gluster
19:48 MinhP_ joined #gluster
19:50 recidive_ joined #gluster
19:52 _br_- joined #gluster
19:53 ultrabizweb joined #gluster
19:58 samppah bug 953887 bug 960046
19:58 glusterbot Bug http://goo.gl/tw8oW high, high, ---, pkarampu, MODIFIED , [RHEV-RHS]: VM moved to paused status due to unknown storage error while self heal and rebalance was in progress
19:58 glusterbot Bug http://goo.gl/2ZMks high, high, ---, rabhat, ON_DEV , [RHEV-RHS] vms goes into paused state after starting rebalance
19:58 kedmison joined #gluster
19:58 jbrooks joined #gluster
19:59 elfar joined #gluster
19:59 Keawman JoeJulian, the other server i restarted glusterd and still can't get it cleared any ideas
20:23 ndevos joined #gluster
20:28 Keawman i have a volume that is started on server1 and online...but on server2 it's started but not online?
20:28 Keawman anyone have any ideas
21:21 flakrat joined #gluster
21:56 mooperd joined #gluster
21:58 fidevo joined #gluster
22:24 semiosis Keawman: check the brick log file on the server where the brick isnt starting
22:25 semiosis should be messages explaining why it failed to start
22:25 semiosis brick logs are on the servers, in /var/log/glusterfs/bricks
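
(For example, on the server whose brick is not online; the exact brick log file name depends on the brick path.)

    gluster volume status myvol
    tail -n 50 /var/log/glusterfs/bricks/*.log
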
22:32 jag3773 I'm running glusterfs 3.2.7 with replicate (1 volume, 4 bricks) on AWS linux and I'm seeing a memory, but I'm not finding anything online about it, except, "upgraded to 3.3".  Does anyone know about any memory leaks in 3.2.7 and how they may be fixed/avoided?
22:33 jag3773 *memory leak is what i mean on that first line
22:46 Keawman semiosis, thanks. I decided to clean things up and rebuild the volume, I was having too many weird issues
23:22 jebba joined #gluster
23:46 robos joined #gluster
