
IRC log for #gluster, 2013-10-15


All times shown according to UTC.

Time Nick Message
00:04 JoeJulian @ports
00:04 glusterbot JoeJulian: glusterd's management port is 24007/tcp and 24008/tcp if you use rdma. Bricks (glusterfsd) use 24009 & up for <3.4 and 49152 & up for 3.4. (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111 and 2049 since 3.4.
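For reference, a minimal firewall sketch based on the ports glusterbot lists above, assuming iptables and GlusterFS 3.4; the brick port range shown is illustrative, since one port is used per brick:
    iptables -A INPUT -p tcp --dport 24007:24008 -j ACCEPT            # glusterd management (24008 only if using rdma)
    iptables -A INPUT -p tcp --dport 49152:49160 -j ACCEPT            # bricks (glusterfsd), 49152 & up on 3.4; widen as needed
    iptables -A INPUT -p tcp --dport 38465:38468 -j ACCEPT            # gluster NFS and NLM
    iptables -A INPUT -p tcp -m multiport --dports 111,2049 -j ACCEPT # rpcbind/portmap and NFS
    iptables -A INPUT -p udp --dport 111 -j ACCEPT                    # rpcbind also uses UDP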
00:07 sprachgenerator joined #gluster
00:25 verdurin_ joined #gluster
00:42 sprachgenerator joined #gluster
01:01 xavih joined #gluster
01:16 sprachgenerator joined #gluster
01:22 kshlm joined #gluster
01:44 vynt joined #gluster
01:47 asias joined #gluster
01:57 jag3773 joined #gluster
02:05 nasso joined #gluster
02:49 aik__ joined #gluster
02:53 Shri joined #gluster
03:08 sgowda joined #gluster
03:11 shubhendu joined #gluster
03:12 edong23 joined #gluster
03:36 davinder joined #gluster
03:45 bala joined #gluster
03:46 sgowda joined #gluster
03:49 kanagaraj joined #gluster
03:52 vynt joined #gluster
03:52 itisravi joined #gluster
04:13 vpshastry joined #gluster
04:16 raghu joined #gluster
04:16 kshlm joined #gluster
04:23 shireesh joined #gluster
04:26 zerick joined #gluster
04:29 RameshN joined #gluster
04:32 nueces joined #gluster
04:38 zerick joined #gluster
04:41 hagarth joined #gluster
04:42 dusmant joined #gluster
04:54 vynt joined #gluster
04:58 vpshastry joined #gluster
04:59 shireesh joined #gluster
05:00 saurabh joined #gluster
05:04 mohankumar joined #gluster
05:05 meghanam joined #gluster
05:05 meghanam_ joined #gluster
05:19 anands joined #gluster
05:26 bharata-rao joined #gluster
05:27 CheRi joined #gluster
05:32 rjoseph joined #gluster
05:32 bala1 joined #gluster
05:48 lalatenduM joined #gluster
05:48 dusmant joined #gluster
05:51 vpshastry1 joined #gluster
05:58 yinyin joined #gluster
06:02 zerick joined #gluster
06:12 kPb_in joined #gluster
06:14 saurabh joined #gluster
06:16 raghu joined #gluster
06:23 sgowda joined #gluster
06:23 jtux joined #gluster
06:26 aravindavk joined #gluster
06:26 Shri853 joined #gluster
06:32 davinder joined #gluster
06:35 bkrram joined #gluster
06:40 vimal joined #gluster
06:40 MediaSmurf left #gluster
06:46 sgowda joined #gluster
06:55 ekuric joined #gluster
07:01 eseyman joined #gluster
07:04 mohankumar joined #gluster
07:04 ctria joined #gluster
07:06 dusmant joined #gluster
07:11 MrNaviPacho joined #gluster
07:15 Dga joined #gluster
07:21 blook joined #gluster
07:21 bkrram test
07:30 keytab joined #gluster
07:30 vpshastry joined #gluster
07:38 mgebbe joined #gluster
07:39 saurabh joined #gluster
07:39 bala joined #gluster
07:41 giannello joined #gluster
07:45 tryggvil joined #gluster
07:48 andreask joined #gluster
07:53 tryggvil joined #gluster
08:10 Debolaz joined #gluster
08:32 Maxence joined #gluster
08:37 atrius joined #gluster
08:40 ricky-ticky joined #gluster
08:42 Rocky__ left #gluster
08:47 Maxence_ joined #gluster
08:58 glusterbot New news from resolvedglusterbugs: [Bug 918052] Failed getxattr calls are throwing E level error in logs. <http://goo.gl/7yXTH>
09:04 bkrram1 joined #gluster
09:07 bala joined #gluster
09:07 shruti joined #gluster
09:19 an joined #gluster
09:23 bala1 joined #gluster
09:25 srsc joined #gluster
09:26 cweiske joined #gluster
09:30 alcir joined #gluster
09:30 alcir hi
09:30 glusterbot alcir: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
09:31 alcir maybe this is a repeated question: what about glusterfs on solaris, openindiana, smartos, illumos and so on?
09:34 morse joined #gluster
09:35 Maxence joined #gluster
09:37 hybrid5121 joined #gluster
09:39 mohankumar joined #gluster
09:40 kkeithley joined #gluster
09:44 mjrosenb dht is supposed to take into account how much free space there is on each brick, right?
09:50 tryggvil joined #gluster
09:54 itisravi joined #gluster
10:05 kanagaraj_ joined #gluster
10:07 satheesh1 joined #gluster
10:07 shruti joined #gluster
10:07 hagarth1 joined #gluster
10:09 bala joined #gluster
10:14 shubhendu joined #gluster
10:17 RameshN joined #gluster
10:21 dusmant joined #gluster
10:24 tziOm joined #gluster
10:25 aravindavk joined #gluster
10:26 rjoseph joined #gluster
10:27 F^nor joined #gluster
10:27 vynt joined #gluster
10:28 eseyman joined #gluster
10:30 aik__ joined #gluster
10:32 sammmm joined #gluster
10:39 ctria joined #gluster
10:41 edward1 joined #gluster
10:45 jcsp joined #gluster
10:49 Zylon joined #gluster
10:58 ctria joined #gluster
11:07 kkeithley1 joined #gluster
11:14 jtux joined #gluster
11:29 ricky-ticky joined #gluster
11:29 andreask joined #gluster
11:48 ekuric joined #gluster
11:49 ricky-ticky joined #gluster
11:53 anands joined #gluster
11:54 santir joined #gluster
11:56 ricky-ticky joined #gluster
12:10 itisravi_ joined #gluster
12:16 jclift joined #gluster
12:22 bala joined #gluster
12:27 Zylon joined #gluster
12:28 RameshN joined #gluster
12:29 cweiske left #gluster
12:34 go joined #gluster
12:43 Guest15173 Hi guys, I'm having problems with geo-replication on 3.4.1 - when I start the geo-rep - I'm getting the following error in the logfile: OSError: [Errno 95] Operation not supported
12:44 go2k which is followed by [repce:188:__call__] RepceClient: call 10689:139851330328320:1381841049.02 (xtime) failed on peer with OSError
12:59 B21956 joined #gluster
12:59 bennyturns joined #gluster
13:01 B21956 left #gluster
13:02 ekuric left #gluster
13:04 ccha2 I got this message "Creating /var/lib/nfs/state, set initial state 1
13:04 bkrram joined #gluster
13:04 ccha2 from the daemon sm-notify
13:04 ccha2 what's that ?
13:07 dusmant joined #gluster
13:07 shubhendu joined #gluster
13:10 P0w3r3d joined #gluster
13:19 go2k guys, does anyone know how to fix an error 95 in geo-replication ?
13:19 kkeithley_ rtfm?  sm-notify - send reboot notifications to NFS peers. It's part of the nfs-utils package. Nothing to do with gluster.
13:19 go2k there is no documentation about it whatsoever and the debug does not tell me much...
13:19 chirino joined #gluster
13:19 go2k OSError: [Errno 95] Operation not supported
13:20 dbruhn go2k, I think he was referencing ccha2's question.
13:22 go2k yeah, I know, we just started writing at the same time
13:26 kkeithley_ correct, intended for ccha2. When an NFS client reboots the NFS server must release all file locks held by apps running on the client.
13:32 kkeithley_ as for EOPNOTSUPP (error 95), dunno. a quick look at geo-rep gsyncd source doesn't reveal anything, i.e. no explicit references to EOPNOTSUPP. Would need a backtrace or more of the log preceding the error and probably higher log level.
13:33 kkeithley_ IOW, we need to figure out what gsyncd called in the Python library that's getting the error.
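A sketch of raising the geo-rep log level to get more detail on an error like the OSError above, assuming GlusterFS 3.4 geo-replication; mastervol and the slave spec are placeholders, and the exact option spelling should be checked against the admin guide for your version:
    gluster volume geo-replication mastervol slavehost:/data/slave-dir config log-level DEBUG   # assumed option name
    gluster volume geo-replication mastervol slavehost:/data/slave-dir stop
    gluster volume geo-replication mastervol slavehost:/data/slave-dir start
    tail -f /var/log/glusterfs/geo-replication/mastervol/*.log   # gsyncd logs on the master side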
13:36 bkrram joined #gluster
13:38 go2k I guess it can not access some files
13:40 go2k http://pastebin.com/5nRaTkHr
13:40 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
13:40 go2k this is how it looks like
13:41 go2k or here, as glusterbot says
13:41 go2k
13:41 go2k http://fpaste.org/46851/81844477/
13:41 glusterbot Title: #46851 Fedora Project Pastebin (at fpaste.org)
13:42 go2k anyone any clues?
13:49 andreask go2k: on the slave is another gluster volume or only a local directory?
13:52 ThatGraemeGuy joined #gluster
13:54 kaptk2 joined #gluster
14:01 [o__o] left #gluster
14:03 [o__o] joined #gluster
14:04 ThatGraemeGuy joined #gluster
14:05 onny1 joined #gluster
14:07 Shri joined #gluster
14:08 go2k andreask: I have just one master, where I mount it locally
14:08 go2k so I have a glusterfs mount on master
14:08 go2k but I replicate the raw data
14:09 go2k I have the mount at /mnt/data, but I replicate /data -> non-gluster-filesystem on slave
14:09 go2k to a regular directory
14:10 danci1973 joined #gluster
14:12 jag3773 joined #gluster
14:16 sjoeboo joined #gluster
14:17 gmcwhistler joined #gluster
14:17 sjoeboo joined #gluster
14:20 wushudoin joined #gluster
14:23 stickyboy joined #gluster
14:24 failshell joined #gluster
14:25 onny1 is it possible to configure a volume with transport type rdma and some kind of tcp fallback?
14:28 Remco rdma is currently broken: https://bugzilla.redhat.com/show_bug.cgi?id=1017176
14:28 glusterbot <http://goo.gl/viLDZq> (at bugzilla.redhat.com)
14:28 glusterbot Bug 1017176: high, unspecified, ---, kaushal, NEW , Until RDMA handling is improved, we should output a warning when using RDMA volumes
14:28 Remco (don't know the answer to your question)
14:28 wahly joined #gluster
14:29 wahly if i have multiple copies of a file in my gluster cluster (say 3 copies and 6 servers), how does gluster determine which server to get the file from?
14:32 kkeithley_ wahly: client sends a lookup to all (three) servers. The first response the client receives is the one the client will read from. The client will also read from the local server if it happens that the client is on the server.
14:32 wahly kkeithley_: perfect. exactly the info i was looking for. thank you!
14:33 bala joined #gluster
14:34 kkeithley_ client will read from the local server starting with 3.4.
14:34 bugs_ joined #gluster
14:35 Norky kkeithley, and presumably the client uses the hashing algorithm to determine which 3 servers out of the 6 the file is (or should be) on?
14:35 Norky to check my own understanding
14:36 kkeithley_ Norky: correct
14:37 wahly i'm not sure if my google-fu is weak or documentation needs to be updated with that info somewhere, but i had a hard time coming up with that info
14:38 vpshastry joined #gluster
14:38 kkeithley_ I believe you might be able to find it in Jeff Darcy's blogs on hekafs.org
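A quick way to see which brick(s) DHT resolved a particular file to is the pathinfo virtual xattr, queried on a FUSE mount; the mount path here is a placeholder:
    getfattr -n trusted.glusterfs.pathinfo /mnt/glustervol/path/to/file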
14:40 jbrooks Hey guys, I have a replicated gluster volume across two nodes. How do I know if it's all replicated? Versus, still catching up
14:40 Norky would gluster benefit from some more detailed user-level documentation?
14:40 kkeithley_ HekaFS was the cloud file system Jeff and I built on top of GlusterFS. The features of HekaFS are slowly being added to Gluster.
14:41 kkeithley_ Norky: absolutely. I believe there's an effort underway on forge.gluster.org. Feel free to pitch in and help
14:41 Norky I'm wondering if I have sufficient understanding to be of any help
14:41 Norky but I suppose one person with half a clue is better than none :)
14:42 kkeithley_ Every little bit helps. Dip your toe in, the water's warm. ;-)
14:42 go2k jbrooks: is that geo or regular gluster replication ?
14:42 jbrooks Regular
14:42 go2k there is no way so far :D
14:42 jbrooks Ah
14:42 jbrooks hmm
14:42 go2k (none that I know if)
14:42 go2k I don't want to give you misleading info, but if you find out let me know too :)
14:43 jbrooks Ok -- I'm trying to debug some issues I'm having w/ ovirt glusterfs domains
14:43 jbrooks will do
14:43 andreask joined #gluster
14:44 jvyas joined #gluster
14:44 johnmark jbrooks: it's synchronous - all replicas are written at the same time
14:44 johnmark jbrooks: what are you seeing?
14:44 johnmark there's no utility to check on the synchronous replication status, because it should never have a lag
14:45 jbrooks johnmark: migration doesn't work for me, and I don't know why
14:45 johnmark jbrooks: hrm. ok. that I don't know
14:45 jbrooks although, now that I think if it, it doesn't work w/ a one brick distributed volume either
14:45 johnmark weird, ok
14:46 kkeithley_ I believe `gluster volume heal $volname info` is what you're looking for
14:46 jbrooks All right, and zero entries means nothing left to replicate?
14:48 andreask1 joined #gluster
14:48 johnmark jbrooks: right. that command is only for instances where a replica had to heal - which is what's supposed to happen when one replica is unreachable, for whatever reason
14:48 kkeithley_ The only reason a "replica N" volume would be out of sync is if there's a network cut or one of the machines is down. After the network is restored and/or the machine comes back on line self heal will start automatically and you can monitor it with the command I showed above
14:51 kkeithley_ If you try to check heal info on a single brick volume, or any other non-replica volume that heal info command will simply tell you it's not a replica volume
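For reference, the heal monitoring commands being discussed, with a placeholder volume name (syntax as in 3.3/3.4):
    gluster volume heal myvol info              # entries still pending self-heal
    gluster volume heal myvol info healed       # recently healed entries
    gluster volume heal myvol info heal-failed  # entries that failed to heal
    gluster volume heal myvol info split-brain  # entries in split-brain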
14:52 jbrooks Catching up on the ovirt ml thread on this. Looks like this is the problem: https://bugzilla.redhat.com/show_bug.cgi?id=987555
14:52 glusterbot <http://goo.gl/SbL8x> (at bugzilla.redhat.com)
14:52 glusterbot Bug 987555: medium, urgent, 3.4.2, rgowdapp, NEW , Glusterfs ports conflict with qemu live migration
14:56 viktorkrivak joined #gluster
14:58 daMaestro joined #gluster
15:02 kkeithley_ migration doesn't work, as in libvirt live migration? And a single brick distributed volume?  So nothing to do with gluster replication after all. Yes, that bug, which is really a bug in libvirt, fwiw, would be the cause.
15:05 kkeithley_ the libvirt people are working on a real fix, and we'll have a work-around for it Real Soon®
15:06 jbrooks That must be it. But confusingly, some people are reporting success with this
15:06 kkeithley_ we'll have a work-around for it in Gluster Real Soon®
15:06 jvyas what is that (r)
15:06 jvyas how do you do that?
15:06 jvyas Gluster
15:06 kkeithley_ I'm magic. ;-)
15:06 jvyas hahaha
15:07 * jvyas experimenting
15:07 * ndevos presses AltGr+shift+r: ®
15:07 onny1 has anyone every tried to monitor a glusterfs client/server with zabbix?
15:07 kkeithley_ ndevos is magic too
15:08 kkeithley_
15:08 kkeithley_
15:08 ndevos hmm, no such magick here
15:09 andreask joined #gluster
15:09 kkeithley_ ²
15:09 ndevos ¼
15:09 kkeithley_ ¼
15:09 kkeithley_ ½
15:09 kkeithley_ ¾
15:09 johnmark onny1: there's a project to do that - https://forge.gluster.org/glubix
15:09 glusterbot Title: Glubix - Gluster Community Forge (at forge.gluster.org)
15:09 jclift Google for unicode html letters.  You'll find heaps. ;)
15:09 kkeithley_ pfft, unicode
15:10 johnmark onny1: I haven't used it, so I would be interested in seeing how well it works for you
15:10 kkeithley_ this isn't htmol
15:10 kkeithley_ *html
15:10 kkeithley_ This is UTF-8 local and input methods ftw
15:10 jclift kkeithley: There are some nifty ones.
15:10 kkeithley_ *locale
15:10 jclift kkeithley: http://www.fileformat.info/info/unicode/char/1f37a/index.htm
15:10 glusterbot <http://goo.gl/Xy3xAJ> (at www.fileformat.info)
15:10 zaitcev joined #gluster
15:11 kkeithley_ Yeah, Unicode is everything and the kitchen sink too.
15:12 onny1 johnmark: very interesting link, thank you
15:12 johnmark onny1: you bet. I'm very interested in hearing feedback on that
15:13 jvyas ALT+SHITF+r?
15:13 kkeithley_ keep it clean, this is a family friendly channel
15:14 wahly left #gluster
15:14 bala joined #gluster
15:15 kkeithley_ You need a key named Multi_key on your keyboard, often referred to as AltGr. Use xmodmap to change one of the keybindings to Multi_key.
15:16 viktorkrivak left #gluster
15:16 phox joined #gluster
15:16 kkeithley_ Then you can do AltGr+R+O for ®
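A sketch of the xmodmap step kkeithley_ describes; keycode 108 is commonly the right Alt key, but that's an assumption, so check the actual keycode with xev first:
    xev | grep keycode                     # press the key you want to remap and note its keycode
    xmodmap -e "keycode 108 = Multi_key"   # assumed keycode; remaps it to the compose (Multi_key) key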
15:17 semiosis :O
15:18 fracky joined #gluster
15:24 vpshastry left #gluster
15:26 P0w3r3d joined #gluster
15:27 andreask joined #gluster
15:28 * phox just Option-r
15:32 kkeithley_ I don't have an Option key on this keyboard.  Most PC keyboards have the AltGr key and I believe the default keymap for many non-US/English keyboard layouts already map the AltGr key to Multi_key. Many, many moons ago when I worked for the MIT X Consortium I fixed the broken implementation and made this stuff work.
15:37 bkrram joined #gluster
15:39 phox heh
15:39 * phox prefers the Apple keymappings here
15:39 bkrram What is the recommended way to replace a machine with some volumes with another machine for replicated and distributed volumes? There was a lot of back and forth on the replace brick recently so I'm not sure what the current recommendations are.
15:39 phox specifically the fact that they make compose care about uppercase/not leads to much more intuitive ways of composing stuff
15:39 phox IMNSHO a very good idea
15:42 Bluefoxicy joined #gluster
15:46 LoudNoises joined #gluster
15:47 davinder joined #gluster
15:48 P0w3r3d joined #gluster
15:55 kkeithley_ We're way off topic, but X's compose definitions can be changed. If you don't like that there are five combinations that yield ®, simply edit, e.g., /usr/share/X11/locale/*/Compose and fix it.
15:57 Bluefoxicy left #gluster
16:00 anands joined #gluster
16:01 bkrram What is the recommended way to replace a machine with some volumes with another machine for replicated and distributed volumes? There was a lot of back and forth on the replace brick recently so I'm not sure what the current recommendations are.  :)
16:05 hagarth joined #gluster
16:08 georgeh|workstat joined #gluster
16:11 aliguori joined #gluster
16:21 SpeeR joined #gluster
16:21 Guest2461 joined #gluster
16:24 go2k hey there, anyone keen on helping with a georep issue: OSError: [Errno 95] Operation not supported ?
16:27 sprachgenerator joined #gluster
16:29 Mo_ joined #gluster
16:33 semiosis ,,(replace)
16:33 glusterbot Useful links for replacing a failed server... if replacement server has different hostname: http://goo.gl/nIS6z ... or if replacement server has same hostname:
16:33 glusterbot http://goo.gl/rem8L
16:33 semiosis bkrram:
16:36 jbrooks Is the .glusterfs folder that lives in your bricks supposed to be owned by root?
16:36 semiosis probably
16:41 johnmark jbrooks: pretty much all gluster ops require root, so yeah, I think so
16:41 jbrooks cool
17:07 semiosis bug 1018793 can be marked resolved.  i dont think i have the ability to do this myself.
17:07 glusterbot Bug http://goo.gl/lpMKtK medium, unspecified, ---, vbellur, NEW , Saucy packaging problem
17:08 giannello what if I cannot stop a volume?
17:08 hagarth semiosis: done
17:08 semiosis hagarth: thanks!
17:09 fracky Did anyone try remove-brick when a client still has open files on that brick? It always ends up with a sigsegv for me.
17:09 bkrram joined #gluster
17:09 bkrram semiosis: what does one do to replace a server when it is still up.. maybe in anticipation of a disk failure?
17:10 an joined #gluster
17:11 semiosis bkrram: ideally you can just cut over the hostname from old to new IP then disconnect the old IP (shutdown, change IP, etc) to cause clients to reconnect, at which point they find the new server
17:11 hagarth fracky: do you have a backtrace?
17:11 semiosis i've done that successfully but always watch my clients closely, ready to unmount/remount if they don't automatically reconnect
17:12 fracky hagarth: (gdb) backtrace
17:12 fracky #0  pthread_spin_lock (lock=0xaaaaaac2) at ../nptl/sysdeps/x86_64/../i386/pthread_spin_lock.c:35
17:12 fracky #1  0x00007ffff7b93d23 in fd_ref (fd=0x555555851e08) at fd.c:447
17:12 fracky #2  0x00007ffff53cd8d5 in fuse_write (this=0x5555557840b0, finh=0x7fffe802db60, msg=0x7ffff6406000) at fuse-bridge.c:2226
17:12 fracky #3  0x00007ffff53e2148 in fuse_thread_proc (data=0x5555557840b0) at fuse-bridge.c:4607
17:12 fracky #4  0x00007ffff7515b50 in start_thread (arg=<optimized out>) at pthread_create.c:304
17:12 fracky #5  0x00007ffff6e7ba7d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:112
17:12 fracky #6  0x0000000000000000 in ?? ()
17:12 semiosis fracky: use a pastebin site or ,,(paste) please
17:12 glusterbot fracky: For RPM based distros you can yum install fpaste, for debian and ubuntu it's pastebinit. Then you can easily pipe command output to [f] paste [binit] and it'll give you a URL.
17:12 Mo_ Is there  any info on how to re-configure gluster on the failed drive?
17:12 fracky Ok sorry my mistake
17:13 fracky http://pastebin.com/qje7YSVq
17:13 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
17:14 Mo_ Is this still the right way of re-configuring a host that now has a new drive? http://gluster.org/community/documentation/index.php/Gluster_3.2:_Brick_Restoration_-_Replace_Crashed_Server
17:14 glusterbot <http://goo.gl/60uJV> (at gluster.org)
17:15 go2k can anyone help with OSError: [Errno 95] Operation not supported in georep ?
17:16 bkrram semiosis: Thanks a ton. Will one do the same with distributed volumes as well?
17:16 go2k the mountbroker thingy
17:17 hagarth fracky: which version are you using?
17:17 semiosis bkrram: well technically yes it should but without replication your clients will have trouble if they try accessing a file on an unreachable server
17:18 semiosis bkrram: safer thing to do would be to stop clients, do the cutover, then start clients again
17:18 semiosis bkrram: so you know they won't get errors
17:18 fracky hagarth: 3.4.1
17:19 bkrram semiosis: ah.. was hoping do so seamlessly but alas:-( Thanks!
17:22 bala joined #gluster
17:22 bkrram left #gluster
17:22 fracky hagarth: I tried to find some bug reports about this and still nothing. So I wondered if it happens only to me...
17:23 fracky hagarth: valgrind said that I use uninitialized memory in pthread_spin_lock.
17:26 glusterbot New news from newglusterbugs: [Bug 1010834] No error is reported when files are in extended-attribute split-brain state. <http://goo.gl/HlfVxX>
17:26 hagarth fracky: do you have links to those bug reports?
17:31 fracky hagarth: BUG 823404
17:31 glusterbot Bug http://goo.gl/exEqy high, medium, ---, sgowda, CLOSED WORKSFORME, I/O fails on the mount point while remove brick migrates data and committed
17:32 glusterbot New news from resolvedglusterbugs: [Bug 1018793] Saucy packaging problem <http://goo.gl/lpMKtK>
17:32 failshell joined #gluster
17:32 fracky hagarth: It looks similar. But it is already closed
17:33 hagarth fracky: doesn't look like a crash in there
17:34 DataBeaver joined #gluster
17:36 saltsa joined #gluster
17:38 go2k anyone can please help me with geo-replication mountbroker issue
17:38 go2k http://fpaste.org/46851/81844477/
17:38 glusterbot Title: #46851 Fedora Project Pastebin (at fpaste.org)
17:38 fracky hagarth: Ok, so it is not as similar as I thought. And that makes me a little bit more confused. I don't think I'm doing something wrong. I even tried recompiling the whole of gluster and the result is still the same.
17:41 fracky hagarth: I dug into the code a little bit and found the problematic piece of code. But I have no idea how it is supposed to be initialized in the first place.
17:41 hagarth fracky: a mutex_lock hasn't been initialized?
17:45 fracky hagarth: I don't know. It looks like the whole struct _fd comes from a strange recast which I don't understand. I can only confirm that the inode member in struct _fd is not initialized when it crashes.
17:51 JonnyNomad joined #gluster
17:54 bala joined #gluster
17:58 hagarth fracky: can you please open a new bug for this one?
17:58 semiosis file a bug
17:58 glusterbot http://goo.gl/UUuCq
18:01 fracky hagarth: Ok, I'll file it tomorrow when I generate some more logs. Thanks for your help.
18:02 hagarth fracky: thanks
18:24 cyberbootje joined #gluster
18:27 chirino joined #gluster
18:33 go2k Guys, anyone with mountbroker experience? I'm having trouble setting up a non-privileged user to replicate over 2 servers
18:33 andreask joined #gluster
18:34 bennyturns joined #gluster
18:36 rotbeard joined #gluster
18:43 luisp joined #gluster
18:45 semiosis go2k: whats the trouble?
18:45 go2k I'm getting this
18:45 go2k http://fpaste.org/46851/81844477/
18:45 glusterbot Title: #46851 Fedora Project Pastebin (at fpaste.org)
18:45 DV__ joined #gluster
18:45 go2k when trying to replicate using non-privileged (non-root) user
18:45 zaitcev joined #gluster
18:47 go2k That's that mountbroker thingy
18:52 go2k it is officially on gluster and RHS pages
18:52 go2k https://access.redhat.com/site/documentation/en-US/Red_Hat_Storage/2.0/html/Administration_Guide/chap-User_Guide-Geo_Rep-Preparation-Settingup_Slave.html
18:52 glusterbot <http://goo.gl/zYz8qI> (at access.redhat.com)
18:52 go2k but no one knows anything about it
18:54 hagarth go2k: can you send out an email on gluster-users with a description of the mountbroker problem?
18:54 PatNarciso joined #gluster
18:54 go2k yeah sure will do... for now I'll have to reduce access to the root account I guess :)
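For reference, a sketch of the slave-side mountbroker options described in the guide linked above; they go in the 'volume management' section of /etc/glusterfs/glusterd.vol on the slave, with geoaccount, geogroup and slavevol as placeholders, and glusterd needs a restart afterwards. Option names are as given in the 3.3/3.4 admin guide, so treat the exact spellings as assumptions:
    option mountbroker-root /var/mountbroker-root
    option mountbroker-geo-replication.geoaccount slavevol
    option geo-replication-log-group geogroup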
19:11 andreask joined #gluster
19:14 mjrosenb a client is responsible for determining which brick a given file gets put on with dht, right?
19:17 semiosis a client, or the nfs server
19:18 kkeithley_ (because the NFS server is, in turn, a client of the glusterfs native (fuse) server)
19:26 andreask joined #gluster
19:33 andreask joined #gluster
19:36 an joined #gluster
19:39 jag3773 joined #gluster
19:40 kaptk2 joined #gluster
19:56 glusterbot New news from newglusterbugs: [Bug 1004519] SMB:smbd crashes while doing volume operations <http://goo.gl/DMsNHh>
19:57 giannello joined #gluster
20:01 mjrosenb semiosis: kkeithley_ ok, because I have two bricks, one with 100 gb free, and one with several tb free, and it seems like most things are being placed on the brick with 100 gb free :-(
20:06 JoeJulian @lucky dht misses are expensive
20:06 glusterbot JoeJulian: http://goo.gl/A3mCk
20:07 JoeJulian mjrosenb: That link explains how DHT chooses the subvolume.
20:08 semiosis mjrosenb: remember, gluster distributes files, not bytes
20:10 mjrosenb semiosis: yes.
20:11 mjrosenb JoeJulian: "getfattr -n trusted.glusterfs.dht -e hex */models/silly_places", where are these attributes stored in general?
20:12 mjrosenb like should I be running this on the clients or the bricks?
20:12 JoeJulian those are on the bricks.
20:12 mjrosenb I'm guessing it decided that memoryalpha had more space than memorybeta long long ago, and now the division lines are all wrong.
20:14 semiosis memoryalpha?  http://en.memory-alpha.org/wiki/Beard
20:14 glusterbot Title: Beard - Memory Alpha, the Star Trek Wiki (at en.memory-alpha.org)
20:14 mjrosenb JoeJulian: and what files should contain the trusted.glusterfs.dht attribute?
20:15 mjrosenb semiosis: that is where I took the name from :-p
20:15 semiosis nice
20:16 JoeJulian trusted.glusterfs.dht is on the directory that your files are placed within.
20:16 badone joined #gluster
20:16 JoeJulian It'll be different on each distribute subvolume
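A sketch of inspecting the layout ranges directly on each brick and recalculating them if they look wrong; the brick path and volume name are placeholders, and fix-layout only rewrites directory layouts while a full rebalance also migrates data:
    getfattr -n trusted.glusterfs.dht -e hex /bricks/brick1/models/silly_places   # run on each brick server
    gluster volume rebalance myvol fix-layout start
    gluster volume rebalance myvol start
    gluster volume rebalance myvol status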
20:20 Alpinist joined #gluster
20:36 jcsp joined #gluster
20:37 MugginsM joined #gluster
20:45 rwheeler joined #gluster
20:58 Alpinist joined #gluster
20:58 P0w3r3d joined #gluster
20:58 mjrosenb JoeJulian: top level directory, or every individual directory?
20:58 mjrosenb oh it looks like it is every individual directory
20:58 mjrosenb fancy.
20:59 JoeJulian yep
20:59 bcipriano joined #gluster
21:01 bcipriano hello - i'm having an issue with a 3.4.1, two-brick distributed cluster, where my files disappear from the share after running a "remove-brick commit".
21:04 JoeJulian bcipriano: Sounds like expected behavior unless you removed the brick without "commit" and waited for "status" to show it's complete.
21:04 bcipriano i did that. i ran "remove-brick start" and waited for a complete, at which point all files showed fine via the share. after the commit, they were gone.
21:05 bcipriano i can see the files when i look at the brick directly, just not on the share.
21:05 StarBeast joined #gluster
21:06 mjrosenb getextattr -x user  trusted.glusterfs.dht /local/media/music/3/
21:06 mjrosenb /local/media/music/3/00 00 00 01 00 00 00 00 7f ffffffff ffffffff ffffffff ffffffff ffffffff ffffffff ffffffff
21:06 mjrosenb so, what are the implications of this? there are many more bytes there than there were in your example?
21:06 mjrosenb oh...
21:06 mjrosenb I bet they are all bytes, which are being sign extended to words.
21:07 mjrosenb so this looks like it is 0x7fffffff to 0xffffffff
21:07 mjrosenb which is about half of the possible values.
21:08 mjrosenb ahh, but another directory is 00 00 00 01 00 00 00 00 00 00 00 00 ffffffff ffffffff ffffffff ffffffff, which is everything?
21:08 JoeJulian Wow, I do take a lot of leaps in my explanation, don't I...
21:09 mjrosenb JoeJulian: both bricks have the same range for this directory, that sounds wrong.
21:09 JoeJulian yes, that's everything. Looks like an overlap. Should have messages stating as much in your client log.
21:12 mjrosenb [2013-09-13 14:22:38.616793] I [dht-layout.c:593:dht_layout_normalize] 0-magluster-dht: found anomalies in /tmp. holes=1 overlaps=0
21:12 bcipriano ah also, the files on the brick have strange permissions - "---------T 2 ubuntu ubuntu    0 Oct 15 20:47 10MB.01"
21:12 mjrosenb bcipriano: that looks like a link?
21:13 mjrosenb bcipriano: lsxattr should show that it has trusted.gluster.linkto (or something similar)
21:13 JoeJulian bcipriano: On a hunch.. try remounting the client...
21:14 JoeJulian or if you can't do that, just mount the volume again in some other directory and take a look
21:16 mjrosenb JoeJulian: grep -i overlap /var/log/glusterfs/data.log  | grep -v overlaps=0 is empty.  does this only occur on writes, or also reads?
21:19 bcipriano okay, I tried remounting on two different clients, no luck, still no files. mounts are via NFS
21:21 bcipriano tried a fuse mount as well just to be sure, and same results
21:24 bcipriano since the files are showing as zero size on the brick, is it possible the remove-brick didn't move the data correctly?
21:28 zerick joined #gluster
21:29 JoeJulian bcipriano: 0 size files with mode 1000 and an extended attribute, "trusted.dht.linkto" are pointers that should redirect the client to the correct brick. They can be deleted if you delete their .glusterfs counterpart.
21:29 JoeJulian bcipriano: http://joejulian.name/blog/what-is-this-new-glusterfs-directory-in-33/
21:29 glusterbot <http://goo.gl/j981n> (at joejulian.name)
21:33 bcipriano those pointers are on the only remaining brick. where is the data in that case? or am I misunderstanding the way those work?
21:34 JoeJulian If you "getfattr -m trusted.dht.linkto -d $file" you can see which brick that pointer is pointing to.
21:35 bcipriano "getfattr -m trusted.dht.linkto -d /gluster/brick0002/folder/10MB.01" returns nothing, no output
21:40 ctria joined #gluster
21:41 bcipriano to be clear, that path above is the path on the brick, not on the share. the files don't appear in the share at all.
21:41 mjrosenb JoeJulian: oh, yeah, what is in .glusterfs?
21:41 mjrosenb JoeJulian: since that directory doesn't really work for me.
21:44 P0w3r3d joined #gluster
21:45 JoeJulian mjrosenb: huh? Doesn't "work" for you?
21:47 sprachgenerator joined #gluster
22:02 DV__ joined #gluster
22:13 bogen1 joined #gluster
22:15 bogen1 ok... very newbie question here... If I add in multiples of 2 (volume was set up with replica 2), when I remove in multiples of 2, is the data no longer accessible in the volume even if there was room to move the data to other bricks? I'm a bit vague as I've not seen a definitive explanation of that.
22:18 bogen1 brb...
22:18 bogen1 joined #gluster
22:20 giannello joined #gluster
22:21 StarBeas_ joined #gluster
22:25 Quantum` joined #gluster
22:25 Quantum` I'm reading that clustering is not recommended on Fedora: https://alteeve.ca/w/Two_Node_Fedora_13_Cluster
22:25 glusterbot Title: Two Node Fedora 13 Cluster - AN!Wiki (at alteeve.ca)
22:26 Quantum` I need to set up a high-performance cluster.  Can anyone recommend applications?
22:28 mcblady joined #gluster
22:32 Quantum` Wow, no one's here.
22:34 harish joined #gluster
22:35 bstromski joined #gluster
22:38 torrancew Quantum`: Some are, but I at least have no idea what you're talking about
22:38 Quantum` meh
22:39 torrancew For example, what kind of cluster?
22:40 Quantum` A high-performance one.
22:40 Quantum` (Condor?)
22:41 torrancew How does that relate to GlusterFS?
22:41 Quantum` This is the only cluster-related channel I can find.
22:41 torrancew Does condor have a channel itself?
22:42 Quantum` LOL that is a good question.
22:42 zerick joined #gluster
22:42 Quantum` Am I understanding that Gluster is pooled filesystem?
22:42 torrancew Yeah, in userspace. It supports striping, replication, distribution, geo-rep, and combinations of the same
22:44 torrancew So some users almost certainly have general HPC cluster knowledge, but many (like me) mainly just deal with Gluster, and then things that are redundant, but perhaps not true clusters, like replicated databases (or things that are clustered, but contained at the app level, like elasticsearch, etc)
22:44 torrancew I mean, "High performance cluster" is quite a vague term, really
22:44 torrancew What matters most is what you want to /do/ with your cluster. That can guide decisions on what type of solution fits better than whatever tool is popular this instant
22:45 bogen1 torrancew: do you know the answer to the question I asked earlier? If I add in multiples of 2 (volume was set up with replica 2), when I remove in multiples of 2, is the data no longer accessible in the volume even if there was room to move the data to other bricks? I'm a bit vague as I've not seen a definitive explanation of that.
22:46 torrancew bogen1: Hmmm, not sure there. I really never explicitly remove, and use gluster mainly for some cross-AZ replication of mission-critical stuff in AWS
22:51 JoeJulian bogen1: If you remove-brick, wait until status says it's complete, then commit, your data /should/ be available after the commit.
22:52 JoeJulian bogen1: But bcipriano did that with 3.4.1 and is saying that it didn't work that way for him. I haven't had a chance to see if I can duplicate his failure.
22:53 bcipriano indeed, yeah bogen1 make sure to test that out first as I found my data missing after I did my commit. still trying to track down what's going on.
22:56 bogen1 bcipriano: ok, thanks
22:56 bogen1 I'll test it
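For reference, the remove-brick sequence JoeJulian describes above, sketched with placeholder names for a distributed-replicated (replica 2) volume where a whole replica pair is being removed:
    gluster volume remove-brick myvol server3:/bricks/b1 server4:/bricks/b1 start
    gluster volume remove-brick myvol server3:/bricks/b1 server4:/bricks/b1 status   # wait until status reports completed
    gluster volume remove-brick myvol server3:/bricks/b1 server4:/bricks/b1 commit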
23:08 mcblady hey guys, i guess i need some help. i did an upgrade from 3.2 to 3.3 and now all my bricks are offline. i also see a warning message about missing 'option transport-type', defaulting to "socket", but if i check the vol file, the option is there in the server section. any help? found this http://www.gluster.org/pipermail/gluster-users/2013-January/035130.html but no real answer there
23:08 glusterbot <http://goo.gl/PntEYq> (at www.gluster.org)
23:13 Quantum` left #gluster
23:25 nueces joined #gluster
23:31 wushudoin joined #gluster
23:48 RicardoSSP joined #gluster
23:49 bcipriano left #gluster
23:49 asias joined #gluster
23:53 jurrien_ joined #gluster
23:55 phillipo joined #gluster
23:57 glusterbot New news from newglusterbugs: [Bug 958781] KVM guest I/O errors with xfs backed gluster volumes <http://goo.gl/4Goa9>
