
IRC log for #gluster, 2013-04-12


All times shown according to UTC.

Time Nick Message
00:06 robos joined #gluster
00:32 _pol joined #gluster
00:32 yinyin joined #gluster
00:33 _pol joined #gluster
00:52 _pol joined #gluster
00:53 _pol joined #gluster
00:59 _pol joined #gluster
01:01 Shdwdrgn joined #gluster
01:19 d3O joined #gluster
01:20 d3O joined #gluster
01:24 flrichar joined #gluster
01:24 flrichar joined #gluster
01:27 RobertLaptop joined #gluster
01:41 kevein joined #gluster
01:44 bala1 joined #gluster
01:55 theron joined #gluster
01:59 portante|lt joined #gluster
02:14 lh joined #gluster
02:41 lh joined #gluster
03:01 bharata joined #gluster
03:59 aravindavk joined #gluster
04:01 aravindavk joined #gluster
04:16 aravindavk joined #gluster
04:19 _pol joined #gluster
04:27 aravindavk joined #gluster
04:32 aravindavk joined #gluster
04:35 saurabh joined #gluster
04:38 aravindavk joined #gluster
04:39 raghu joined #gluster
04:41 kevein joined #gluster
04:47 aravindavk joined #gluster
04:50 mohankumar__ joined #gluster
04:53 hagarth joined #gluster
04:55 aravindavk joined #gluster
04:58 sgowda joined #gluster
05:03 pai joined #gluster
05:04 aravindavk joined #gluster
05:07 hchiramm_ joined #gluster
05:07 rastar joined #gluster
05:10 aravindavk joined #gluster
05:12 bulde joined #gluster
05:13 aravindavk joined #gluster
05:23 bala joined #gluster
05:23 deepakcs joined #gluster
05:26 sgowda joined #gluster
05:33 shylesh joined #gluster
05:41 pai joined #gluster
05:52 deepakcs joined #gluster
05:53 rgustafs joined #gluster
06:00 vshankar joined #gluster
06:07 Nevan joined #gluster
06:11 puebele joined #gluster
06:30 puebele joined #gluster
06:33 ollivera joined #gluster
06:34 mohankumar__ joined #gluster
06:38 guigui1 joined #gluster
06:42 satheesh joined #gluster
06:46 tjikkun_work joined #gluster
06:46 mohankumar__ joined #gluster
06:49 d3O joined #gluster
06:54 d3O joined #gluster
06:54 ricky-ticky joined #gluster
06:55 ekuric joined #gluster
06:57 ngoswami joined #gluster
06:57 guigui1 joined #gluster
06:59 ctria joined #gluster
07:07 satheesh joined #gluster
07:21 mohankumar__ joined #gluster
07:23 hircus joined #gluster
07:27 mohankumar joined #gluster
07:32 jkroon joined #gluster
07:33 morse joined #gluster
07:37 rotbeard joined #gluster
07:42 andreask joined #gluster
07:42 jkroon hi all, as i understand it glusterfs requires xattrs on the underlying filesystem.  to the best of my understanding my bricks do have them (I can set/remove xattrs using setfattr -n "user.xyz" -v bar some_path), but I can't see that gluster is setting any xattrs on the bricks anywhere?
07:42 jkroon this is with glusterfs v3.3
07:44 jkroon http://hekafs.org/index.php/2011/04/glusterfs-extended-attributes/ seems to be the only reference I can locate on the subject.
07:44 glusterbot <http://goo.gl/Bf9Er> (at hekafs.org)
07:53 dobber joined #gluster
08:00 Norky joined #gluster
08:08 H__ jkroon: look in brick/.glusterfs/
08:12 jkroon has it moved?  because on the exact same software running a getfattr -d -m trusted brick/ shows stuff
08:12 jkroon and i don't see a .glusterfs directory anywhere
08:12 H__ 3.2 did not have that directory tree
08:14 joehoyle- joined #gluster
08:14 joehoyle joined #gluster
08:16 jkroon this is 3.3
08:18 jkroon http://pastebin.com/YnZG3gnh <-- this is what I see on the *working* system
08:18 glusterbot Please use http://fpaste.org or http://dpaste.org . pb has too many ads. Say @paste in channel for info about paste utils.
08:19 jkroon ... nm, forgot to run volume start :(
08:19 ujjain joined #gluster
08:20 jkroon strange that it would still mount and actually allow for file creation and manipulation though ...
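
For reference: once a volume has been created and started, glusterfs tags each brick root with trusted.* extended attributes, which is what was missing above before "volume start". A minimal check, assuming a brick at the hypothetical path /data/brick:

    # dump gluster's trusted.* xattrs on the brick root (values shown in hex)
    getfattr -d -m trusted -e hex /data/brick
    # a started 3.3 brick typically shows trusted.glusterfs.volume-id and
    # trusted.gfid; replicated bricks also carry trusted.afr.* changelog keys
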
08:25 jkroon joined #gluster
08:39 sgowda joined #gluster
08:41 sgowda joined #gluster
08:43 Nagilum_ is there a way to have glusterfs automatically resolve split brain by simply picking one side - I don't care which
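
As far as I know, 3.3 has no "pick a side automatically" switch; the usual manual fix is to discard the copy you trust less and let self-heal rebuild it. A rough sketch, where the brick path, file name and gfid are entirely hypothetical:

    # on the brick chosen as the "loser", look up the file's gfid ...
    getfattr -n trusted.gfid -e hex /data/brick/some/file
    # ... remove the file and its .glusterfs hard link (the first two and next
    # two hex digits of the gfid form the directory path), then trigger a heal
    rm /data/brick/some/file
    rm /data/brick/.glusterfs/ab/cd/abcdef01-2345-....   # hypothetical gfid path
    gluster volume heal myvolume full
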
08:56 georges joined #gluster
08:57 mohankumar joined #gluster
09:00 Nagilum_ documentation also has bugs http://www.gluster.org/wp-content/uploads/2012/05/Gluster_File_System-3.3.0-Administration_Guide-en-US.pdf : in the table with the gluster attribs, it doesn't list cluster.quorum-type and the available options for *.loglevel don't list INFO, yet it's given as the default
09:00 glusterbot <http://goo.gl/bzF5B> (at www.gluster.org)
09:01 Nagilum_ *.log-level that is
09:09 hchiramm__ joined #gluster
09:21 itisravi joined #gluster
09:31 jkroon joined #gluster
09:36 manik joined #gluster
09:43 rastar joined #gluster
09:45 karoshi joined #gluster
09:46 karoshi is it possible to set up a two-node replicated volume where a node has a brick with data and the other an empty brick and have gluster replicate the data to the empty brick?
09:51 H__ yes
09:56 sgowda joined #gluster
09:58 kbsingh karoshi: yeah, you basically add another brick and rebalance
09:59 karoshi the problem is that those are the only two bricks
09:59 karoshi and gluster won't let me create a volume with a single brick
09:59 karoshi or is it possible?
10:00 kbsingh i see what you mean
10:01 kbsingh make backups :)
10:02 karoshi it's several TB of data
10:02 karoshi I was hoping gluster would somehow allow me to avoid the pain
10:03 karoshi how do people do migration where only a single brick is available initially?
10:05 H__ rsync ?
10:06 karoshi yes, once you have another brick
10:07 karoshi let's assume in the beginning you only have one brick
10:07 karoshi (the second server isn't available yet)
10:07 karoshi how does one start?
10:07 karoshi I see that distributed volumes can have a single brick
10:07 karoshi is it possible to convert a distributed to replicated later when the second brick is available?
10:08 H__ i don't get it. you have only 1 brick and you want to start gluster on top of that. why ?
10:09 karoshi because I plan to add more bricks later
10:10 andreask you can create a single brick volume yes, and then add a second brick later to enable replication
10:10 karoshi but the hardware for the other bricks isn't available yet
10:10 karoshi how do you create a single-brick REPLICATED volume?
10:10 karoshi (or that can be turned to replicated later)
10:10 H__ hmm, ugly via union mount on itself ?
10:11 karoshi with kludges, I can probably find a way, I was hoping glusterfs allowed me to do it in a cleaner way
10:13 tryggvil joined #gluster
10:14 glusterbot New news from newglusterbugs: [Bug 951469] Gluster_File_System Administration_Guide-en-US.pdf cluster.quorum-type, log-level <http://goo.gl/dTQ8A> || [Bug 951473] cluster.min-free-disk <http://goo.gl/2NE1V> || [Bug 950024] replace-brick immediately saturates IO on source brick causing the entire volume to be unavailable, then dies <http://goo.gl/RBGOS>
10:16 andreask karoshi: simply do a ... gluster volume create yourvolume yourhost:/srv/brick01
10:16 karoshi andreask: that creates a distributed volume, not replicated
10:16 karoshi can it be converted to replicated later when I add the second brick?
10:17 samppah yes, gluster volume add-brick volname replica 2 host:/brick
10:17 andreask karoshi: yes, and later you make it replicated ... gluster volume add-brick yourvolume replica 2 secondhost:/srv/brick01
10:17 andreask yeah
10:17 karoshi but then it would be a distributed-replicated or pure replicated?
10:17 andreask replicated
10:17 karoshi ok, so it sounds good
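
Putting that together, the conversion path being described looks roughly like this on 3.3 (hostnames, volume name and brick paths are placeholders):

    # start with a plain one-brick volume while only one server exists
    gluster volume create myvolume host1:/srv/brick01
    gluster volume start myvolume
    # later, once the second server is available, turn it into a 2-way replica
    gluster volume add-brick myvolume replica 2 host2:/srv/brick01
    # and request a full self-heal so existing data is copied to the new brick
    gluster volume heal myvolume full
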
10:18 karoshi what about healing then
10:18 karoshi is it possible to do it incrementally or in the background, without hitting the servers too hard?
10:20 karoshi (I'm asking because last I tried, with an old gluster version, when healing of a large volume was triggered everything froze for a while)
10:20 samppah karoshi: can you remember what version that was?
10:20 samppah and how much data you had in it?
10:21 karoshi unfortunately I can't be sure about the version, but it was about 20GB of data
10:21 karoshi that is, not much data, but enough to bring the machines to their knees if done all at once
10:22 samppah hmm.. i have been testing 3.4 alpha2 release and yesterday i healed ~100G over 10G network.. it was very smooth with that
10:23 karoshi well, I guess I'll have to try then
10:25 samppah @latest
10:25 glusterbot samppah: The latest version is available at http://goo.gl/zO0Fa . There is a .repo file for yum or see @ppa for ubuntu.
10:35 hagarth joined #gluster
10:41 edward1 joined #gluster
10:42 rastar1 joined #gluster
10:52 jkroon joined #gluster
10:54 andreask joined #gluster
10:54 pai joined #gluster
10:57 duerF joined #gluster
10:59 Nagilum_ is glusterfs 3.4 supposed to work with ext4?
11:04 dxd828 I have a gluster replicated cluster with two master nodes, each with a single brick on an xfs drive. Then I have two client servers using the mount point for lots of small files. In the last few hours the application has crashed because it could not access the files it needed. Is there any way I can tell what is going wrong in gluster?
11:05 dxd828 I'm on a full gigabit network with very low load, all the servers are at low usage as well. I have checked the gluster client logs and can't see anything else from when I originally mounted the share.
11:07 samppah Nagilum_: i have heard that there is workaround for ext4 bug.. but i'm not sure if it's in 3.4
11:11 lalatenduM joined #gluster
11:12 andrewjs1edge joined #gluster
11:14 samppah dxd828: did i understand correctly that there are no error messages in the logs when it is not able to access files?
11:16 H__ the ext4 issue is 'only' affecting recent kernels anyway.
11:18 ollivera joined #gluster
11:21 dxd828 samppah, there are no errors in the gluster client mnt log file. But there are lots in the application which is trying to access the files, and it's not just one client; it happens to both clients within a few mins
11:22 jkroon joined #gluster
11:43 piotrektt_ joined #gluster
12:14 glusterbot New news from newglusterbugs: [Bug 918917] 3.4 Beta1 Tracker <http://goo.gl/xL9yF>
12:16 dustint joined #gluster
12:32 rastar joined #gluster
12:33 robo joined #gluster
12:34 bulde joined #gluster
12:40 bennyturns joined #gluster
12:41 morse joined #gluster
12:44 joehoyle joined #gluster
12:46 guigui joined #gluster
12:49 duerF joined #gluster
12:50 manik joined #gluster
13:02 joehoyle joined #gluster
13:04 hagarth joined #gluster
13:16 yinyin_ joined #gluster
13:19 jskinner_ joined #gluster
13:22 mister_zombie1 joined #gluster
13:25 mister_zombie1 Hi, is there any way to add replication to an existing cluster? I tried playing around with the config files of my volume, to no avail. Alternate-question-that-would-fix-my-problem: How will gluster behave if I add bricks that already have data on them?
13:26 plarsen joined #gluster
13:27 Norky by data, do you mean that the additional bricks were previously part of a GlusterFS volume?
13:28 mister_zombie1 I mean, I have tons of data on some volume, let's call it "A". I also have tons of data on volume "B", and it used to be a replica of volume A.
13:30 Norky so, the answer to my question is "yes"? :)
13:30 mister_zombie1 No, they are just regular disks.
13:30 mister_zombie1 Replicated through the use of periodical rsyncs.
13:30 Norky ahh, okay
13:30 mister_zombie1 It makes me cry on the inside.
13:32 mister_zombie1 Long story short, eventually the rsync cron job was deactivated and never reactivated cuz the guy in charge left, and it took me a while to realize what was not happening, and now I have a mischievous plan to install gluster and use it on all this.
13:34 ujjain joined #gluster
13:34 plarsen joined #gluster
13:35 Norky If you do a "gluster volume add-brick <VOLNAME> replica 2 serverB:/brickB" then it will replicate to that new brick. I *believe* that if the new brick contains files/directories in the same hierarchy as on the original brick/volume it will use those, i.e. it won't copy files it already has
13:35 Norky however, I'm not perfectly certain, and if this is company data, test the procedure first
13:35 H__ replica 2 requires that multiples of 2 bricks are added
13:36 mister_zombie1 Will do, thanks. Was not aware I could do "volume add-brick <volname> replica 2 <node>:<brick>"
13:37 Norky you can just take a random directory of stuff, rsync it to another directory on the same box, then change a few files in the original directory, create a single-brick volume out of the first directory, then see what happens when you do an add-brick
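
Norky's dry run can be scripted in a few minutes; a sketch under the assumption of throwaway directories on a single test host (all names hypothetical, and gluster may complain about a replica pair living on one server outside of a test):

    rsync -a /some/testdata/ /srv/brickA/     # sample data for brick A
    rsync -a /srv/brickA/ /srv/brickB/        # pre-seeded copy for brick B
    touch /srv/brickA/newer-file              # let the copies drift a little
    gluster volume create testvol testhost:/srv/brickA
    gluster volume start testvol
    gluster volume add-brick testvol replica 2 testhost:/srv/brickB
    gluster volume heal testvol full
    diff -r /srv/brickA /srv/brickB           # see how pre-existing files were treated
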
13:37 mister_zombie1 So I can't do replica 2 when adding a brick to a one-brick-volume?
13:37 bulde joined #gluster
13:37 partner yes you can
13:37 partner exactly as was said earlier
13:38 Norky thank you partner, I thought that was the case, but I was just checking the docs to confirm what I said because I wasn't 100% certain
13:39 mister_zombie1 Any of you guys have an explicit opposition to copy-pasting our exchange on this subject so that I can keep it as notes?
13:39 kkeithley Norky, mister_zombie1: starting with 3.3.0 you can use "replica 2" in an add-brick
13:39 mister_zombie1 That's important to know indeed, is there some way to do it before 3.3?
13:40 partner sorry, yes, i'm using 3.3.1 here and i've confirmed that approach works, most recently just today
13:40 Norky yep, I knew that, I just wasn't absolutely certain it worked to turn a single-brick (i.e. non-replica) volume into a replicated one
13:40 kkeithley before 3.3 you have to stop your volume(s), edit your probe, hand edit your volume configs, it's not easy
13:40 mister_zombie1 kkeithley: Using EPEL 6, IIRC I only get 3.2.
13:40 kkeithley s/edit your probe/probe/
13:40 glusterbot kkeithley: Error: I couldn't find a message matching that criteria in my history of 1000 messages.
13:41 Norky use the gluster repository
13:41 partner Norky: i even repeated it 5 min ago on our test env but for some reason i managed to lose the "picture of pre-state" :o
13:41 kkeithley @yum repos
13:41 glusterbot kkeithley: I do not know about 'yum repos', but I do know about these similar topics: 'yum repository'
13:41 kkeithley @yum
13:41 glusterbot kkeithley: I do not know about 'yum', but I do know about these similar topics: 'yum repo', 'yum repository', 'yum33 repo', 'yum3.3 repo'
13:41 Norky EPEL is a bit out of date
13:41 kkeithley @yum33 repo
13:41 glusterbot kkeithley: kkeithley's fedorapeople.org yum repository has 32- and 64-bit glusterfs 3.3 packages for RHEL/Fedora/Centos distributions: http://goo.gl/EyoCw
13:41 Norky damn you glusterbot and your lack of fuzzy matching!
13:41 kkeithley if you use my fedorapeople.org yum repo you can get 3.3.1
13:42 mister_zombie1 kkeithley: It was mentioned yesterday that you also package for EPEL?
13:42 kkeithley yes, there are rpms for EPEL in my repo
13:44 mister_zombie1 Meh. Anyway. If I have a hard requirement for a version, they'll have no choice but to give it to me, them network guys.
13:45 plarsen joined #gluster
13:47 partner but, got a question. is it normal to get a "transport endpoint is not connected" few times at commit when moving brick from server to another using replace-brick ?
13:47 partner "at commit time" - on client, was doing few df commands there and got the error 3-4 times (maybe 10 second window)
13:47 Norky that's an error from the FUSE module I think....
13:48 chirino joined #gluster
13:48 Norky what the underlying cause is, I don't know...
13:50 partner googled a bit around but mostly got hits to ancient docs so thought to ask it here
13:51 partner i guess i haven't noticed it earlier as i mostly have replicated volumes
14:07 jiffe98 joined #gluster
14:11 manik joined #gluster
14:23 sefz joined #gluster
14:23 karoshi joined #gluster
14:25 H__ Would this be a proper way to seed a target brick (bypassing gluster) ? rsync -xavHXW --numeric-ids --delete --inplace src dst
14:25 lh joined #gluster
14:25 lh joined #gluster
14:27 semiosis transport endpoint is not connected means that the brick(s) needed for an operation are missing (disconnected)
14:27 semiosis check your client log
14:27 semiosis for details
14:28 semiosis H__: seems reasonable, try it
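
For anyone reading later, the flags in that rsync line break down as follows (src and dst stand for whatever the real brick paths are):

    rsync -xavHXW --numeric-ids --delete --inplace src dst
    # -x: stay on one filesystem        -a: archive mode (perms, owners, times)
    # -v: verbose                       -H: preserve hard links
    # -X: preserve extended attributes (run as root so the trusted.* namespace
    #     gluster uses is copied)       -W: whole-file copies, no delta transfer
    # --numeric-ids: keep raw uid/gid   --delete: drop files missing from src
    # --inplace: rewrite files in place instead of via temporary copies
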
14:29 karoshi so I created a single-brick volume by doing: gluster volume create myvolume server1:/brick, which created a distribute volume
14:29 semiosis there's really no point to a single brick volume imho
14:30 karoshi then, as instructed to do here, I added another brick with gluster volume add-brick myvolume replica 2 server2:/brick
14:30 karoshi (this is a test, but it's what we'll have to do on the real system)
14:30 karoshi the newly added brick was empty
14:30 karoshi and I was expecting gluster to replicate _existing_ data to the new brick
14:31 karoshi which isn't happening
14:31 karoshi am I doing something wrong?
14:31 semiosis where'd you get this "instruction" ?
14:31 karoshi in this very channel
14:31 semiosis from whom?
14:31 semiosis when?
14:31 karoshi two hours ago or so
14:31 * semiosis scrolls back
14:31 karoshi let me see
14:32 karoshi http://irclog.perlgeek.de/gluster/2013-04-12
14:32 H__ semiosis: I am, it's running and I expect it will take around 2 days for my 1 TiB test brick
14:32 karoshi andreask
14:32 glusterbot Title: IRC log for #gluster, 2013-04-12 (at irclog.perlgeek.de)
14:33 semiosis karoshi: thanks i read it, you got a lot of advice from several people, not all consistent, not all correct imo :)
14:34 robos joined #gluster
14:34 semiosis lets rule out rebalance right away, that's not a replicating feature, won't help you sync up your two replicas
14:34 semiosis karoshi: what version of glusterfs are you using?
14:34 karoshi basically what I'm trying to do is:
14:35 karoshi starting from a single brick with data, later add another brick to create a 2-bricks replica, without having to copy the data myself
14:35 karoshi 3.3.1
14:36 semiosis gluster volume add-brick $volname replica 2 $newbrick; gluster volume heal $volname full
14:36 semiosis should do it
14:36 bugs_ joined #gluster
14:36 semiosis you should try this out on a test cluster
14:37 karoshi yes, I'm trying on test machines
14:37 semiosis great
14:37 karoshi the real one has several TB of data
14:37 karoshi :)
14:39 karoshi it does not seem to be doing anything
14:39 karoshi ie, I don't see data appearing on the second brick
14:39 semiosis there is an undocumented option you can set to reduce the number of files healing in parallel in the background, cluster.background-self-heal-count, to reduce load during the sync
14:40 semiosis also try changing the heal algorithm, that is a documented option, see 'gluster volume set help' for details.  for me, full is better than diff, but ymmv
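
Both knobs are applied with "gluster volume set"; a sketch, with the caveat that the exact option names below are the 3.3 ones and may differ in other releases:

    # cap how many files self-heal in the background at once (undocumented)
    gluster volume set myvolume cluster.background-self-heal-count 4
    # choose the data self-heal algorithm: "diff" or "full"
    gluster volume set myvolume cluster.data-self-heal-algorithm full
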
14:40 karoshi great, but I guess that's useful when _something_ is being healed :)
14:40 semiosis like i said before, 'gluster volume heal $volname full', to cause a full heal on the volume
14:41 karoshi of course, I've done that
14:41 semiosis and nothing is healing?
14:41 karoshi I don't see files appearing on the second brick
14:41 karoshi let me try to redo everything from the beginning
14:42 semiosis check logs, i'd guess name resolution or iptables are getting in the way
14:42 partner semiosis: ref transport thingy, was just wondering. i guess its just not possible to do the switchover without interruption
14:42 semiosis partner: well, that *should* be possible
14:44 partner semiosis: that was the next thing i would have asked, ie. what steps can i take to make it as transparent as possible. this time the use case was to move the one and only brick of one volume to another server; it was even totally empty, sized 8 TB, and it took quite long to get back in touch with it
14:46 semiosis standard troubleshooting applies... check logs, diagnose problems (network?), fix problems
14:47 karoshi now after removing the volum and trying to start over, I get "/brick or a prefix of it is already part of a volume"
14:47 glusterbot karoshi: To clear that error, follow the instructions at http://goo.gl/YUzrh or see this bug http://goo.gl/YZi8Y
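
The linked fix amounts to stripping the markers a previous volume left behind; roughly, and only on a brick you really do intend to reuse (the path is hypothetical):

    setfattr -x trusted.glusterfs.volume-id /brick
    setfattr -x trusted.gfid /brick
    rm -rf /brick/.glusterfs
    # a glusterd restart afterwards doesn't hurt
    service glusterd restart
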
14:52 partner semiosis: it's just, i am not aware of having a problem, rather standard behaviour.. but if it "should" be transparent then i'll have a closer look
14:54 Nagilum_ OT: how can I find out from which repository / url a certain rpm was installed? (assuming the repository is still available/enabled)
14:54 karoshi doh, now it's working
14:54 karoshi healing
14:56 karoshi ah, and I see it doesn't move big stuff until the client actually accesses them, which is good
14:57 karoshi semiosis: thanks
14:57 semiosis karoshi: yw
15:04 nueces joined #gluster
15:09 awickhm joined #gluster
15:17 robinr joined #gluster
15:20 kkeithley @ports
15:20 glusterbot kkeithley: glusterd's management port is 24007/tcp and 24008/tcp if you use rdma. Bricks (glusterfsd) use 24009 & up. (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111.
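
Translated into firewall terms, a minimal iptables sketch for a 3.3 server would open those ranges (the upper brick port depends on how many bricks the server has ever carried; 24020 below is just an example):

    iptables -A INPUT -p tcp --dport 24007:24008 -j ACCEPT   # glusterd management (+rdma)
    iptables -A INPUT -p tcp --dport 24009:24020 -j ACCEPT   # brick daemons, one port each
    iptables -A INPUT -p tcp --dport 38465:38468 -j ACCEPT   # gluster NFS and NLM
    iptables -A INPUT -p tcp --dport 111 -j ACCEPT           # portmapper
    iptables -A INPUT -p udp --dport 111 -j ACCEPT
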
15:21 wN joined #gluster
15:21 jbrooks joined #gluster
15:22 _pol joined #gluster
15:23 robinr hi, once in a while in my volumes with replica 2, especially after a big rsync, there will be a file that gluster complains is out of sync (according to gluster volume heal VOLUME info); usually, i check that the file is no longer open and the file checksums on both servers are the same. I typically clear the error by using the Gluster client to cp the file to a new file, remove the file in question and then rename it back to the old file. This is Gluster-3.
15:23 robinr (can't go to Gluster-3.3.1 because of NFS issues I encountered; a bug report has been filed with core dumps and logs). I'm exploring whether there are any ways for me to clear the files safely by just setting the file attributes or via some other mechanism?
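
robinr's workaround, written out as commands on a glusterfs client mount (the path is hypothetical); rewriting the file through the client refreshes its replication metadata on both bricks, which is what clears the stale heal-info entry:

    cp /mnt/gluster/some/file /mnt/gluster/some/file.tmp
    rm /mnt/gluster/some/file
    mv /mnt/gluster/some/file.tmp /mnt/gluster/some/file
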
15:23 _pol joined #gluster
15:24 ricky-ticky joined #gluster
15:24 duerF joined #gluster
15:25 bulde joined #gluster
15:25 sjoeboo_ joined #gluster
15:26 _pol joined #gluster
15:27 _pol joined #gluster
15:35 Supermathie Nagilum_: rpm -qi <packagename>
15:36 Nagilum_ Supermathie: that doesn't say anything about the repo
15:38 Supermathie Nagilum_: Not directly, but usually you can infer it from Packager, etc.
15:40 Supermathie Nagilum_: Ahah! yum info <package> tells you the repo.
15:42 Nagilum_ right, that looks better!
15:52 NeonLicht joined #gluster
15:54 NeonLicht Hello.  I have a stripe volume on 5 bricks which is working fine.  When I try adding a new (6th) brick, I get an 'Operation failed on UUID' error (it doesn't say UUID, but an actual UUID number).  I've replaced the hard drive with a new one and the same happens.  Any help on how I can check what is actually happening, please?
16:01 Supermathie NeonLicht: Stripe width 5? Think you need to add 5 drives at a time. (don't know if you can volume add-brick VOL stripe 6 NEWBRICK)
16:01 Supermathie <--- new user of gluster but it sounds like that's your problem
16:03 NeonLicht Supermathie, yes, you can.  Actually I've added all 5 one by one before.
16:04 NeonLicht It's type distribute, sorry.
16:07 Supermathie NeonLicht: Ah that's different :) Checked the logs?
16:09 NeonLicht Supermathie, I'm looking at them, but I don't know where to look, that's why I'm asking what I should look for exactly.
16:09 lh joined #gluster
16:12 semiosis NeonLicht: check 'gluster peer status' on *all* of your servers, look for ,,(peer rejected)
16:12 glusterbot NeonLicht: I do not know about 'peer rejected', but I do know about these similar topics: 'peer-rejected'
16:12 semiosis yes, ,,(peer-rejected)
16:12 glusterbot http://goo.gl/nWQ5b
16:12 NeonLicht semiosis, it's a local brick.
16:12 semiosis NeonLicht: i can't stress enough that you check 'gluster peer status' on *each*and*every* server
16:13 semiosis i dont understand "it's a local brick" maybe if you ,,(pasteinfo) i will get it
16:13 glusterbot Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
16:14 NeonLicht semiosis, http://pastie.org/7465792
16:14 glusterbot Title: #7465792 - Pastie (at pastie.org)
16:15 NeonLicht I mean the brick is local to the server I'm trying to add the brick from.
16:18 lh joined #gluster
16:18 lh joined #gluster
16:19 Supermathie NeonLicht: can you paste the add command and output?
16:20 NeonLicht Supermathie, http://pastie.org/7465838
16:20 glusterbot Title: Private Paste - Pastie (at pastie.org)
16:21 duerF joined #gluster
16:24 bulde joined #gluster
16:24 Supermathie nothing springs to mind, (I doubt it's as simple as removing the trailing slash). Try running the command again with 'tail --follow=name /var/log/gluster/*.log' running.
16:26 NeonLicht Supermathie, removing the trailing slash gives me a 'wrong brick type' error.
16:26 Supermathie try rmdir /mnt/sdf2/gluster/ and then re-adding it w/o slash
16:26 Supermathie (after remaking directory :p )
16:29 NeonLicht rm'd dir, re-created it, tried again, same error.
16:29 NeonLicht There is something in the logs, though: [2013-04-12 18:27:44.759043] W [socket.c:1494:__socket_proto_state_machine] 0-socket.management: reading from socket failed. Error (Transport endpoint is not connected), peer (127.0.0.1:1000)
16:29 glusterbot NeonLicht: That's just a spurious message which can be safely ignored.
16:32 Mo___ joined #gluster
16:43 semiosis NeonLicht: you only have *one* server?
16:44 NeonLicht No, semiosis, I have some more, but I'm not using any bricks from them at the moment.
16:45 semiosis irrelevant
16:45 semiosis i tried to stress -- twice -- the importance of running 'gluster peer status' on *all* servers, looking for peer rejected
16:46 semiosis no one ever listens when I recommend that, no matter how hard i try to emphasize it :(
16:46 NeonLicht LOL
16:46 semiosis happens every time
16:46 ctria joined #gluster
16:47 semiosis actually anything other than "Peer in cluster (connected)" is a problem
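
Following that advice across a small cluster is a one-liner, assuming ssh access and hypothetical hostnames; anything other than "Peer in Cluster (Connected)" on any server points at the problem:

    for h in server1 server2 server3; do echo "== $h =="; ssh "$h" gluster peer status; done
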
16:48 NeonLicht I see... checking...
16:49 Supermathie if A can't tell B "Hey, I'm adding this brick" it's going to fail
16:50 Supermathie semiosis: Nobody listens to good advice. That's what makes it so useful - nobody knows it :)
16:51 NeonLicht OK, semiosis, that's my problem too... I have another server which isn't responding.
16:51 NeonLicht When I probe from the other server, it says the other one is already part of another cluster.
16:52 NeonLicht The problem originated because this second server was reinstalled from scratch, and therefore lost all the gluster config.
16:52 Supermathie Those kinds of details are useful to mention up front...
16:53 semiosis NeonLicht: maybe these ,,(replace) links will help...
16:53 glusterbot NeonLicht: Useful links for replacing a failed server... if replacement server has different hostname: http://goo.gl/4hWXJ ... or if replacement server has same hostname:
16:53 glusterbot http://goo.gl/rem8L
16:53 NeonLicht Thanks, semiosis, reading it...
16:53 semiosis Supermathie: it's even more important to mention that kinda thing up front if one intends to disregard diagnostic advice/inquiries ;)
16:54 NeonLicht Sorry, guys, I had completely disregarded the other server, since it wasn't being used at all for the volume.
16:54 NeonLicht I couldn't imagine that could affect this particular volume at all.
16:55 semiosis pretty much everything you do with the gluster command requires syncing all servers
16:55 semiosis data ops usually go directly to the respective bricks, but cluster config ops are synced across all servers
16:56 NeonLicht I see, semiosis, I won't forget that!  :-)
16:56 semiosis (metadata ops, like directory creation/deletion on the other hand are synced across all bricks)
16:57 NeonLicht Understood!
16:57 NeonLicht Thanks a lot... reading the article and 'replacing' the server.
16:58 semiosis great, that should get you most of the way there.  you may need to restart the glusterd process on your servers a couple times, i'm not sure why but sometimes you have to do that more than other times
17:00 duerF joined #gluster
17:00 NeonLicht Noted, thanks!
17:08 NeonLicht It worked, semiosis, Supermathie.  Thank you very much!  I've been reading and trying things for a couple of days and couldn't find a solution.  You've saved my day!!  :-)
17:12 awickhm joined #gluster
17:16 __Bryan__ joined #gluster
17:28 sjoeboo_ joined #gluster
17:34 portante a2, avati: checkout http://starlogs.net/#gluster/glusterfs
17:34 glusterbot Title: Starlogs (at starlogs.net)
17:35 portante kkeithley: ^^^
17:35 kkeithley that's sick
17:35 Guest83265 left #gluster
17:36 portante ;)
17:36 JoeJulian I'm like... ok... then I remembered I turn off javascript by default.
17:40 bennyturns joined #gluster
17:49 rgustafs joined #gluster
17:50 rwheeler joined #gluster
17:50 mister_zombie1 left #gluster
17:53 lh joined #gluster
17:58 jdarcy joined #gluster
18:04 robo joined #gluster
18:06 t35t0r joined #gluster
18:16 glusterbot New news from resolvedglusterbugs: [Bug 906966] Concurrent mkdir() system calls on the same directory can result in D-state hangs <http://goo.gl/vQ15o>
18:16 glusterbot New news from newglusterbugs: [Bug 951661] Avoid calling fallocate(), even when supported, since it turns off XFS's delayed-allocation optimization <http://goo.gl/MVdqz> || [Bug 895528] 3.4 Alpha Tracker <http://goo.gl/hZmy9> || [Bug 949242] Introduce fallocate support <http://goo.gl/Bwh9K>
18:20 semiosis NeonLicht: glad we could help
18:25 jbrooks joined #gluster
18:30 tryggvil joined #gluster
18:33 NeonLicht semiosis  :-)
18:50 _pol_ joined #gluster
18:52 andreask joined #gluster
18:57 rwheeler_ joined #gluster
19:03 _pol joined #gluster
19:05 Supermathie ndevos: Around?
19:09 premera_a joined #gluster
19:09 Unidentified4773 joined #gluster
19:12 bulde joined #gluster
19:22 NeonLicht I have another question.  I have a server which is running an old version of Ubuntu (I'll be upgrading it to Debian in the near future) and GlusterFS 2.0.2.  I'm running GlusterFS 3.2.7 on all the other servers.  Would it be possible to mount some volumes on the old system at all, or should I just forget about it and wait until I install Debian, please?
19:23 awickhm_ joined #gluster
19:30 tryggvil joined #gluster
19:33 semiosis NeonLicht: probably not possible
19:34 NeonLicht OK, semiosis, I won't spent any time trying then.  I'll wait until all services are migrated to another server to reinstall and join that box to the cluster.  Thank you.
19:34 semiosis yw
19:35 JoeJulian ... you could nfs mount...
19:35 NeonLicht s/spent/spend/
19:35 glusterbot What NeonLicht meant to say was: OK, semiosis, I won't spend any time trying then.  I'll wait until all services are migrated to another server to reinstall and join that box to the cluster.  Thank you.
19:35 semiosis yes ,,(nfs)
19:35 glusterbot To mount via nfs, most distros require the options, tcp,vers=3 -- Also an rpc port mapper (like rpcbind in EL distributions) should be running on the server, and the kernel nfs server (nfsd) should be disabled
19:35 NeonLicht Yes, LOL, that's exactly what I meant!
19:36 NeonLicht Yes, JoeJulian, thank you.  That's what I'm doing, but I was wondering if I could use the glusterfs protocol instead of NFS.
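
Concretely, the fallback for the old 2.0.x box is a plain NFSv3 mount against any of the 3.2.7 servers, as glusterbot's factoid describes (hostname, volume and mountpoint are placeholders):

    mount -t nfs -o tcp,vers=3 server1:/myvolume /mnt/myvolume
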
19:45 nueces joined #gluster
19:52 bulde joined #gluster
20:04 Supermathie I don't suppose anybody's familiar enough with the xdr decoding code in gluster to help me? :)
20:12 sjoeboo_ joined #gluster
20:56 CROS_ joined #gluster
21:12 H__ joined #gluster
21:12 H__ joined #gluster
21:23 rwheeler joined #gluster
21:51 ndevos Supermathie: ah, I've thought of the error in my suggestion - that function is used to both encode and decode
21:52 ndevos which means that the size parameter must be set correctly - the .data_len attribute contains the size when encoding, and should be overwritten when decoding
21:53 ndevos KERBOOM happens when an idea is only half looked at :-/
21:54 tryggvil joined #gluster
22:28 duerF joined #gluster
22:34 fidevo joined #gluster
22:48 duerF joined #gluster
23:37 duerF joined #gluster
23:55 duerF joined #gluster
