IRC log for #gluster, 2013-09-30

All times shown according to UTC.

Time Nick Message
00:04 glusterbot New news from newglusterbugs: [Bug 1002940] change in changelog-encoding <http://goo.gl/dmQAcW>
00:06 l0uis raar: glusterfs --version
00:13 raar glusterfs 2.0.9 built on Apr 11 2010 20:45:02
00:13 raar super old, right :\
00:13 l0uis lol
00:13 l0uis yes :)
00:13 l0uis we're at 3.4.1 now
00:16 raar gotcha, thanks :)
00:16 raar as I suspected
00:18 jskinner joined #gluster
00:27 RicardoSSP joined #gluster
00:27 RicardoSSP joined #gluster
00:43 lkthomas joined #gluster
00:45 raar and I guess using this super old version may contribute to super horrible performance issues, right?
01:01 sac`away joined #gluster
01:29 ProT-0-TypE joined #gluster
01:30 ProT-0-TypE joined #gluster
01:32 mohankumar joined #gluster
02:20 harish_ joined #gluster
02:41 CheRi joined #gluster
02:55 kshlm joined #gluster
02:58 harish joined #gluster
03:01 sgowda joined #gluster
03:18 lalatenduM joined #gluster
03:20 ababu joined #gluster
03:22 davinder joined #gluster
03:22 shubhendu joined #gluster
03:46 shylesh joined #gluster
04:02 shylesh joined #gluster
04:05 ppai joined #gluster
04:07 shyam joined #gluster
04:12 dusmant joined #gluster
04:18 ndarshan joined #gluster
04:38 kanagaraj joined #gluster
04:39 shapemaker joined #gluster
04:41 pdrakewe_ joined #gluster
04:44 dneary_ joined #gluster
04:44 atrius_ joined #gluster
04:44 duerF^ joined #gluster
04:44 wgao_ joined #gluster
04:44 harish_ joined #gluster
04:45 Rocky___2 joined #gluster
04:45 social joined #gluster
04:52 16WAA6I4Q joined #gluster
04:52 bstr_ joined #gluster
04:52 16WAA6I4Q joined #gluster
04:54 shruti joined #gluster
04:59 vpshastry joined #gluster
05:00 lalatenduM joined #gluster
05:01 16WAA6I4Q joined #gluster
05:01 bstr_ joined #gluster
05:21 sgowda joined #gluster
05:23 vpshastry joined #gluster
05:27 aravindavk joined #gluster
05:39 ProT-0-TypE joined #gluster
05:47 raghu joined #gluster
05:49 CheRi joined #gluster
05:54 micu1 joined #gluster
06:01 anands joined #gluster
06:06 nshaikh joined #gluster
06:10 psharma joined #gluster
06:11 satheesh joined #gluster
06:19 rgustafs joined #gluster
06:22 mohankumar joined #gluster
06:22 jtux joined #gluster
06:22 mohankumar joined #gluster
06:27 harish_ joined #gluster
06:29 vimal joined #gluster
06:35 kPb_in joined #gluster
06:41 davinder joined #gluster
06:49 polfilm joined #gluster
06:51 glusterbot New news from resolvedglusterbugs: [Bug 949406] Rebalance fails on all the nodes when glusterd is down on one of the nodes in the cluster <http://goo.gl/Q8dyW>
06:55 ctria joined #gluster
06:58 vshankar joined #gluster
07:00 jtux joined #gluster
07:01 jporterfield joined #gluster
07:01 eseyman joined #gluster
07:02 ngoswami joined #gluster
07:05 spandit joined #gluster
07:09 ricky-ticky joined #gluster
07:11 ekuric joined #gluster
07:20 hybrid512 joined #gluster
07:21 lalatenduM joined #gluster
07:25 andreask joined #gluster
07:35 shireesh joined #gluster
07:37 Staples84 joined #gluster
07:42 vpshastry joined #gluster
07:46 mgebbe_ joined #gluster
07:47 shireesh joined #gluster
07:51 harish_ joined #gluster
08:20 shireesh joined #gluster
08:22 vpshastry joined #gluster
08:25 vpshastry left #gluster
08:31 kshlm joined #gluster
08:44 morse joined #gluster
08:44 manu joined #gluster
08:45 psharma joined #gluster
08:45 bstr_ joined #gluster
08:59 mbukatov joined #gluster
09:02 nshaikh joined #gluster
09:11 pdrakeweb joined #gluster
09:17 DV joined #gluster
09:25 jskinner joined #gluster
09:27 CheRi| joined #gluster
09:38 CheRi joined #gluster
09:46 TDJACR joined #gluster
09:47 spandit joined #gluster
09:50 RameshN joined #gluster
09:52 keytab joined #gluster
09:53 keytab Hello
09:53 glusterbot keytab: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
09:54 manik joined #gluster
09:55 StarBeast joined #gluster
09:56 keytab I have a question about gluster. I need a shared folder on 4 clients. Currently I do this with NFS, but if the NFS server goes down, my share stops. I would like a clustered file share instead
09:57 keytab is it possible with gluster?
09:58 NuxRo keytab: yes
09:58 keytab Do I keep using NFS or should I change the sharing method?
10:02 StarBeast joined #gluster
10:03 ndarshan joined #gluster
10:03 shireesh joined #gluster
10:12 andreask keytab: with a native gluster mount the clients will detect the dead server and use the next one
10:15 lalatenduM joined #gluster
10:15 satheesh3 joined #gluster
10:20 keytab andreask: I found this page: http://gluster.org/community/documentation/index.php/Gluster_3.2_Native_Client_Guide thank you very much
10:20 glusterbot <http://goo.gl/oS4Tt5> (at gluster.org)
10:22 andreask yw
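[Editor's note: the native-mount failover andreask describes can be sketched as a mount command. Hostnames, volume name, and mountpoint below are made up for illustration; `backupvolfile-server` is only used to fetch the volume file if the first server is down at mount time — after mounting, the client talks to all bricks directly and fails over on its own.]

```shell
# Mount a gluster volume with the native FUSE client; if server1 is
# unreachable at mount time, fall back to server2 for the volume file.
mount -t glusterfs -o backupvolfile-server=server2 server1:/myvol /mnt/shared

# Persistent equivalent in /etc/fstab:
# server1:/myvol  /mnt/shared  glusterfs  defaults,_netdev,backupvolfile-server=server2  0 0
```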
10:29 kanagaraj joined #gluster
10:31 davinder joined #gluster
10:32 eseyman joined #gluster
10:39 lkthomas joined #gluster
10:40 satheesh joined #gluster
10:41 shyam left #gluster
10:41 anands joined #gluster
10:44 lalatenduM joined #gluster
10:46 DV joined #gluster
10:50 vpshastry joined #gluster
10:58 vpshastry left #gluster
11:02 davinder joined #gluster
11:03 DV joined #gluster
11:08 failshell joined #gluster
11:17 shireesh joined #gluster
11:18 mohankumar joined #gluster
11:23 rwheeler joined #gluster
11:36 lalatenduM joined #gluster
11:43 bala joined #gluster
11:47 andreask joined #gluster
12:00 manik joined #gluster
12:02 anands joined #gluster
12:07 ndarshan joined #gluster
12:17 dusmant joined #gluster
12:19 aravindavk joined #gluster
12:20 ctria joined #gluster
12:22 failshell joined #gluster
12:23 shubhendu joined #gluster
12:24 kanagaraj joined #gluster
12:25 jag3773 joined #gluster
12:28 bala joined #gluster
12:31 bstr_ joined #gluster
12:38 vshankar joined #gluster
12:44 B21956 joined #gluster
12:53 rgustafs joined #gluster
12:54 B21956 joined #gluster
12:54 jclift joined #gluster
13:12 harish_ joined #gluster
13:13 glusterbot New news from newglusterbugs: [Bug 986775] file snapshotting support <http://goo.gl/ozgmO>
13:13 DV__ joined #gluster
13:24 ctria joined #gluster
13:24 jclift joined #gluster
13:25 vpshastry1 joined #gluster
13:29 chirino joined #gluster
13:30 B21956 joined #gluster
13:31 ndk joined #gluster
13:32 vpshastry1 left #gluster
13:41 vshankar joined #gluster
13:53 recidive joined #gluster
13:55 dbruhn joined #gluster
13:59 kopke joined #gluster
14:00 bugs_ joined #gluster
14:01 jclift left #gluster
14:01 jclift joined #gluster
14:01 kopke hi all, I've got some trouble configuring gluster 3.4 on debian unstable: I can't use an IPv6 address. I tried both an IPv6 address and a hostname that resolves only to IPv6, and both failed. I get this error:
14:01 kopke [2013-09-30 13:57:20.362730] E [common-utils.c:211:gf_resolve_ip6] 0-resolver: getaddrinfo failed (System error)
14:01 kopke [2013-09-30 13:57:20.362782] E [name.c:249:af_inet_client_get_remote_sockaddr] 0-glusterfs: DNS resolution failed on host ipv6.google.fr
14:02 jclift kopke: Gluster doesn't fully support IPv6 yet, so the servers you're using Gluster for need to be resolvable with IPv4.
14:02 kopke bad news, I'm migrating all my servers to ipv6 :/
14:03 jclift kopke: Ouch. :(
14:03 kkeithley Doctor, it hurts when I go like this
14:03 jclift kopke: Maybe ask on gluster-users mailing list or even gluster-devel, if the IPv6 problem has been fixed yet. ;)
14:03 jclift kopke: I'm not sure where it is in the priority list...
14:03 jclift kopke: And maybe someone's already got to it.
14:04 kopke ok, will do in the next hours
14:04 jclift :)
14:04 kopke tks :)
14:05 rwheeler joined #gluster
14:06 jclift joined #gluster
14:09 wushudoin joined #gluster
14:12 navid__ joined #gluster
14:19 mtanner_ joined #gluster
14:19 harish_ joined #gluster
14:22 diegows_ joined #gluster
14:31 dusmant joined #gluster
14:39 shyam joined #gluster
14:43 jporterfield left #gluster
14:44 shyam left #gluster
14:48 pea_brain joined #gluster
14:50 Dga joined #gluster
14:50 pea_brain hi all, i have heard that split brain conditions can lead to issues. i am planning to use v3.4.1 - how can i simulate a split brain condition ? how do i simulate a split brain recovery. any help would be valuable. i would like to try it out before putting it into production.
14:57 dbruhn If you turn quorum on it should resolve split brain issues for you
14:57 dbruhn to simulate the issue set up your system with replication, start copying to the system and take one of the replication pairs out
14:58 dbruhn or go to the subsystem and modify a file directly
14:58 dbruhn run an LS from the mount point side
14:59 dbruhn and it should have the error, also the healing will make it show up
14:59 dbruhn not sure if the process to repair a split brain issue is the same in 3.4.1 I am still on 3.3.1
14:59 dbruhn jclift, are we getting RDMA fixes in 3.4.2?
15:03 pea_brain dbruhn: thanks. will try it out.
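[Editor's note: dbruhn's simulation steps could look roughly like this on a replica-2 test volume. Server names, brick paths, and the volume name are hypothetical; run this only on a throwaway setup.]

```shell
# Create and start a replica-2 test volume across two servers.
gluster volume create testvol replica 2 server1:/bricks/b1 server2:/bricks/b1
gluster volume start testvol

# To force a split brain: kill server1's brick process, modify a file via
# the client mount, bring server1 back and kill server2's brick, then
# modify the same file again. Each replica now has a write the other
# never saw; reading the file from the mount returns EIO, and:
gluster volume heal testvol info split-brain

# Quorum (as dbruhn suggests) largely prevents this by refusing writes
# when a majority of the replica set is not reachable:
gluster volume set testvol cluster.quorum-type auto
```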
15:06 phox joined #gluster
15:06 phox eh
15:07 phox so I upgraded 3.3.1 -> 3.4.0 and now apparently it can't find the server brick ports
15:07 phox all volumes on the server show brick PIDs, but no port (says N/A)
15:07 phox ideas?
15:08 phox ah
15:08 phox [2013-09-30 15:05:03.306061] W [rpc-transport.c:175:rpc_transport_load] 0-rpc-transport: missing 'option transport-type'. defaulting to "socket"
15:08 phox so I guess I set that?
15:08 phox ah so this is now a post-configurable option
15:08 phox probably.
15:09 manik2 joined #gluster
15:10 kaptk2 joined #gluster
15:11 jclift dbruhn: Probably not.  I haven't had time to sink my teeth into this stuff to drive it.
15:12 jclift dbruhn: But, I might be able to soon.  Finishing up a few other things, and I still do want to get the RDMA stuff working properly again.
15:12 kkeithley I'm reasonably certain that no rdma-related changes were made for 3.4.1
15:12 kkeithley Mellanox is working with us to whip our rdma implementation into shape
15:13 dbruhn last I heard was to not expect anything till 3.4.2
15:13 dbruhn is mellanox going to provide coding support?
15:14 phox anyone re: my glusterd being a glusturd and not listening on ports for bricks?
15:14 dbruhn is iptables or something blocking it?
15:14 dbruhn what distro?
15:14 phox no.  read above.
15:14 phox [2013-09-30 15:10:51.006648] W [rpc-transport.c:175:rpc_transport_load] 0-rpc-transport: missing 'option transport-type'. defaulting to "socket"
15:14 phox buncha that
15:14 phox set it explicitly on one volume; it claimed success
15:15 kkeithley missing option transport type means you didn't specify "transport tcp" when you created your brick, I believe.
15:15 phox kkeithley: I specified tcp,rdma with v3.3.1
15:15 phox they all show 'Transport-type: tcp,rdma'
15:16 tryggvil joined #gluster
15:16 kkeithley then I dunno
15:16 phox including the one I'd previously set to 'tcp' 'success[fully]'
15:16 dbruhn which log are you getting that out of?
15:16 phox also I could not set it explicitly to tcp,rdma with the new version
15:16 phox cli.log
15:16 phox apparently
15:17 phox 'volume status' shows no ports listening for any bricks
15:17 phox hm.
15:17 phox huh.  weird... *looks more*
15:19 phox so setting 'transport' has turned into having nfs.transport-type set
15:19 phox lovely broken-ass code :|
15:19 phox this is what's called "aliasing"
15:20 phox so what would anyone here do to try to get such a system back up and running
15:20 phox right now I have data that should be available to users not available =/
15:22 phox anyone? =/
15:24 Peanut semiosis: if I apt-get upgrade my Ubuntu cluster, it will update the glusterfs packages. How 'risky' is this when doing one machine after the other? Is this 3.4.1, or a patch to 3.4?
15:24 kkeithley In 3.4.x glusterfsd listens on unreserved ports, i.e. > 1024. E.g. on mine, the glusterfsd is listening on 49152
15:25 phox kkeithley: yeah, my clean install is doing that.  it's just broken on the one I upgraded :l
15:25 kkeithley My `gluster volume status` shows:
15:25 kkeithley Gluster process                          Port   Online  Pid
15:25 kkeithley Brick f19node1:/var/tmp/bricks/volx      49152  Y       30848
15:25 phox yeah I have a Brick atlas-ib:/mnt/gluster-roots/projects  N/A  Y  4936
15:26 phox so not so much with having a port
15:26 kkeithley weird
15:26 phox is there any way to -try- telling it to change the brick transport type after creation?
15:26 phox right now I'm considering attempting to sketchy-recreate the volume with the same brick in-situ
15:27 phox which I know is not recommended, but unless there's another angle to bashing this thing into working, well...
15:27 kkeithley not seeing such an option
15:27 phox it did let me set "transport-type"
15:27 phox and bitched when I tried to set it to something with a comma in it, so I was assuming that meant it was trying to use the value for something
15:29 kkeithley there's nfs.transport-type, tcp and rdma. But I suppose you already knew about that
15:29 phox yeah
15:30 phox so somehow the stored options for 3.3.1 vs 3.4.0 are incompatible, I guess.
15:30 kkeithley that's not supposed to be the case, but you seem to have found a corner case
15:30 phox apparently
15:30 phox although I don't believe I've done anything particularly special
15:30 phox I have about 2 options set
15:31 phox auth.allow and performance.read_ahead or whatever it is
15:31 sprachgenerator joined #gluster
15:31 Technicool joined #gluster
15:32 phox heh, that's brilliant
15:33 phox on one of these filesystems .glusterfs is larger than the data on the brick
15:34 phox wth is 'device vg' o.O
15:34 phox oh, and does 3.4.0 -have- RDMA ?
15:34 phox and is it to be semi-trusted?
15:34 phox I didn't catch whatever led up to people talking about it a few minutes ago
15:35 zerick joined #gluster
15:35 kkeithley yes, 3.4.x has rdma
15:36 kkeithley I've used rdma from the head of the tree. It was a while ago, circa 3.4.0 timeframe. It worked then
15:37 badone joined #gluster
15:37 phox volume create: projects: failed: /mnt/gluster-roots/projects or a prefix of it is already part of a volume  <- no it's not and I deleted its /.glusterfs dir :|
15:37 glusterbot phox: To clear that error, follow the instructions at http://goo.gl/YUzrh or see this bug http://goo.gl/YZi8Y
15:37 * phox looks
15:38 phox ah
15:39 phox gah, anyone know what package typically provides setfattr?
15:39 phox heh
15:39 phox hm, attr, probably, ok
15:40 kkeithley attr on RHEL,Fedora,CentOS
15:40 * phox nods
15:40 sprachgenerator joined #gluster
15:40 phox same on deb
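[Editor's note: the "path or a prefix of it is already part of a volume" error phox hit a few lines up is cleared by removing the brick's volume-id extended attributes as well as its .glusterfs directory — which is what the page glusterbot linked walks through. The brick path here is the one from phox's own error; treat the exact xattr names as the commonly documented ones rather than gospel for every version.]

```shell
# On each brick directory being reused for a new volume:
setfattr -x trusted.glusterfs.volume-id /mnt/gluster-roots/projects
setfattr -x trusted.gfid /mnt/gluster-roots/projects
rm -rf /mnt/gluster-roots/projects/.glusterfs
```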
15:41 phox yay!  ... volume start: projects: failed: Commit failed on localhost. Please check the log file for more details.
15:44 phox seems to start as tcp
15:44 phox so I would assume that's what was going sideways from the start, because as of creating this volume I have more '0-rpc-transport: missing 'option transport-type'. defaulting to "socket"'
15:44 phox br0ken
15:47 phox also 'gluster volume stop whatever force' seems broken
15:47 phox i.e. it still gets stupid and asks me for input
15:47 phox yay!
15:47 * phox hits it with yes
15:49 shyam joined #gluster
15:49 shyam left #gluster
15:50 LoudNoises joined #gluster
15:50 phox I hate fixing things with a script with a chainsaw duct-taped to the front of it :|
15:50 phox this is quite seriously not a pretty way to do any of this
15:50 phox s/(do)/have to $1/
15:50 glusterbot What phox meant to say was: oh, and have to $1es 3.4.0 -have- RDMA ?
15:51 phox right, sure, pick a -random- message to apply that to, bot...
15:52 phox OTOH the brick on my test 3.4.0 install IS set to tcp,tdma
15:52 phox *rdma
15:52 phox anyways, looks like I can get this POS back on its feet =/
15:52 phox probably.
15:55 jruggiero joined #gluster
15:55 phox too much shit in /.glusterfs/ -> this is taking a while :l
15:55 dtyarnell joined #gluster
16:02 vpshastry joined #gluster
16:02 quique joined #gluster
16:02 vpshastry left #gluster
16:02 pea_brain left #gluster
16:02 quique is there a Brick Restoration - Replace Crashed Server doc for 3.4?
16:03 mohankumar joined #gluster
16:03 * phox wonders how his stupid servers rediscovered one another again
16:03 phox =/
16:04 torrancew quique: ya, same as the 3.2, just change the 3.2 to 3.4 in the URL
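[Editor's note: the replace-crashed-server procedure torrancew points at boils down to giving the rebuilt server the dead one's UUID before re-probing. A rough sketch follows; the glusterd.info path is the 3.x default, the UUID placeholder must be filled in from a healthy peer, and the file may carry additional lines (e.g. operating-version) on newer releases.]

```shell
# On a healthy peer, note the failed server's UUID:
gluster peer status

# On the replacement server (same hostname), set that UUID before starting:
echo "UUID=<uuid-from-peer-status>" > /var/lib/glusterd/glusterd.info
service glusterd start

# Probe any healthy peer so the volume configuration syncs over,
# then let self-heal repopulate the bricks:
gluster peer probe server1
```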
16:04 phox I'm going to guess that the client somehow does this magically or something :l
16:06 phox so is there any way to script volume stop and volume delete _without_ using yes?
16:06 phox or echo or whatever. not being a smartass here.
16:06 adrien_ewi joined #gluster
16:06 adrien_ewi Hello everybody :)
16:07 adrien_ewi I imagine you don't speak French?
16:10 adrien_ewi :/
16:11 phox I speak a little =/
16:11 phox I'm Canadian
16:12 adrien_ewi Cool :). My question is simple: if we want to access the data from the "bricks", do we have to go through the GlusterFS client?
16:13 adrien_ewi When I don't mount the volumes, I don't get any synchronization.
16:14 phox you can use a "brick" via the glusterfs client or via an NFS client
16:15 kaptk2 joined #gluster
16:15 adrien_ewi So the only solution is to mount the shared volume on the server itself if we want to access the data.
16:15 adrien_ewi ?
16:17 jclift joined #gluster
16:18 phox but if you don't want to modify the data (?), you can use the "brick" directly, as long as you're not using "replica"s
16:18 jclift left #gluster
16:19 phox but ONLY if you don't modify the data
16:20 JoeJulian If I'm translating that right, the answer is: Always access your volume through a mountpoint, either FUSE or NFS, but never through the bricks directly.
16:21 phox JoeJulian: although if you don't butt heads with it there's not a lot wrong with _reading_ from the brick directly IF you're not using replication
16:21 phox generally it should be unnecessary, but say you're in my situation and got burned by broken gluster code :)
16:21 JoeJulian Funny thing about rules.... if you know what you're doing you can often break them. ;)
16:21 jclift joined #gluster
16:22 jskinner joined #gluster
16:22 phox I'm creating volumes right now with pre-populated bricks :l
16:22 adrien_ewi I installed GlusterFS on 3 servers that replicate a share between them (/var/www/glusterfs/). That folder is then mounted with the OpenVZ "BIND" command on virtual machines
16:22 phox because there's not a lot else I can do because 3.4.0 is broken
16:22 jclift left #gluster
16:23 JoeJulian I've heard of problems with OpenVZ and fuse... There are specific options that you'll need to set (and I don't know what they are).
16:23 kkeithley phox: You can use -script or --script to skip the yes/no responses
16:23 phox kkeithley: ah.  not very mentioned in the online help, but ok cool
16:23 JoeJulian Can we just do this in english. It's the "accepted" international support language and since this channel is logged, I'd like this information to be searchable in english.
16:24 phox adrien_ewi: http://translate.google.com :)
16:24 glusterbot Title: Google Translate (at translate.google.com)
16:24 phox then you can ask everyone :)
16:24 phox I'm sure we can mostly figure out any discrepancies.  better than my poor French at least.
16:24 adrien_ewi Ahaha !! http://french_translate.google.com :)
16:24 adrien_ewi I use "BIND" for share in OpenVZ
16:25 phox adrien_ewi: this is probably a question for OpenVZ people, regarding using a "FUSE" filesystem
16:25 kkeithley or --mode=script
16:25 phox kkeithley: k
16:25 zaitcev joined #gluster
16:25 phox I'll probably use --script
16:25 phox removing all of my ~/.glusterfs/ dirs is taking a while :l
16:26 phox *grumble*
16:26 adrien_ewi I use this ===> http://openvz.org/Bind_mounts
16:26 glusterbot Title: Bind mounts - OpenVZ Linux Containers Wiki (at openvz.org)
16:26 * kkeithley isn't sure which one is right
16:26 phox kkeithley: I'll check it out
16:26 phox might have to do more of those
16:26 phox kkeithley: informative documentation always helps there :P
16:27 phox kkeithley: do you recall if that goes out the end of the command or elsewhere?
16:27 JoeJulian Switches /should/ be able to go anywhere, but I always put them first.
16:28 kkeithley IIRC it'll work at the end
16:28 phox ok
16:28 adrien_ewi To access the data I have to go through a client ?????????????????
16:29 kkeithley examples from some of the .../tests/bugs/*.t include `gluster volume stop --mode=script $vol`
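[Editor's note: so a non-interactive teardown, following kkeithley's examples, would look like this — using the "projects" volume name from earlier in the log as the stand-in.]

```shell
# --mode=script suppresses the interactive y/n confirmation prompts,
# so these can be called from scripts without piping in "yes":
gluster --mode=script volume stop projects
gluster --mode=script volume delete projects
```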
16:29 JoeJulian adrien_ewi: You should, unless you know when it's acceptable not to.
16:30 Mo_ joined #gluster
16:31 adrien_ewi JoeJulian : I don't speak English very well :/
16:31 JoeJulian adrien_ewi: Some people that speak English natively don't speak it very well.
16:32 adrien_ewi JoeJulian : :/
16:32 adrien_ewi hybrid512 is French !!
16:33 phox "was" ;)
16:33 phox for purposes of Freenode, at least.
16:33 adrien_ewi yes :/
16:36 compbio JoeJulian: wolud it be possible to update the top hit for the gluster chat archive to point to the real chat archive? I always have trouble finding it
16:36 compbio namely: http://www.gluster.org/interact/chat-archives/
16:36 glusterbot Title: Chat Archives | Gluster Community Website (at www.gluster.org)
16:36 JoeJulian phox: That "volume status" issue you were mentioning earlier, I don't suppose you tried 3.4.1 to see if it still had that problem?
16:37 adrien_ewi o_0
16:37 phox JoeJulian: no, that would take me rolling my own; going from semiosis' wheezy debs right now...
16:37 JoeJulian /topic shows "Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/" which are correct.
16:37 glusterbot Title: Logs for #gluster | BotBot.me [o__o] (at botbot.me)
16:37 phox JoeJulian: it's also being stupid and refusing to start volumes I create as RDMA, so...
16:38 phox there are definitely still some kinda-related bits of gimpy that I can reproduce
16:38 semiosis :O
16:38 JoeJulian rolling your own for what distro/build?
16:39 phox deb
16:39 phox (as implied by the latter bit of that message :P)
16:39 JoeJulian "From" deb... doesn't mention "to".
16:40 phox JoeJulian: "to installed"
16:40 phox ;)
16:40 phox going from debs to some other package format would be a bit weird.
16:40 semiosis @latest
16:40 glusterbot semiosis: The latest version is available at http://goo.gl/zO0Fa . There is a .repo file for yum or see @ppa for ubuntu.
16:40 adrien_ewi It's a cult in here, you only answer the English speakers!
16:40 compbio JoeJulian: sorry, do you know who's in charge of the gluster.org page? (I mistakenly thought it was you) it's pretty much impossible to find the logs via web
16:40 JoeJulian Now I'm more confused... Why do you have to "roll your own" to instal from the ppa?
16:40 adrien_ewi :)
16:40 semiosis phox: debs of 3.4.1 are up, as of friday: http://download.gluster.org/pub/gluster/glusterfs/3.4/3.4.1/Debian/
16:40 glusterbot <http://goo.gl/3aVtiv> (at download.gluster.org)
16:41 phox semiosis: oh, then that's probably what I'm running
16:41 phox I apparently didn't look
16:41 * phox checks
16:41 semiosis well it's a new repo
16:41 JoeJulian compbio: Ah, it's not me but that's a good point. Look for a major overhaul VSN.
16:41 phox ah
16:41 phox yeah I'm running your 3.4.0-2
16:41 phox so this new repo will be the persistent one?
16:41 semiosis all debian packages of glusterfs 3.4.1 will go there :)
16:42 phox semiosis: and later 3.4.x under parallel dirs?
16:42 semiosis yes, as is tradition for d.g.o
16:42 semiosis s/tradition/convention/
16:42 glusterbot What semiosis meant to say was: yes, as is convention for d.g.o
16:42 phox oh.  yeah, I forgot that the point version is hardcoded in the repo path :P
16:43 JoeJulian Can't you use "LATEST" in the path for the ppa?
16:44 semiosis well you *can* but when 3.5.0 comes out that might cause trouble
16:44 semiosis or if a compatibility breaking change is released in 3.4.n+1
16:44 phox syncing and trying 3.4.1
16:44 JoeJulian http://download.gluster.org/pub/gluster/glusterfs/3.4/LATEST/Debian/
16:44 glusterbot <http://goo.gl/TmmIeG> (at download.gluster.org)
16:44 badone joined #gluster
16:45 semiosis JoeJulian: ooh!
16:47 phox weird, stupid apt
16:47 phox claims to have 3.4.1 but doesn't want to install it because it's "already the latest version"
16:47 phox no it's not, apt.
16:48 * JoeJulian needs to blog about the one thing in Windows 8 he really can't stand: stupid useless password complexity requirements. It's entropy, people, not complexity!
16:49 phox JoeJulian: did they make their own cleanroom implementation, called 'craplib'?
16:49 phox probably tied it into something called 'facePAM'
16:50 JoeJulian They're probably using an NSA algorithm.
16:50 phox Not Secure Atall? :P
16:51 kkeithley semiosis: what do want to discuss wrt Ubuntu pkg stuff in, e.g., .../extras/UbuntuDEB?
16:58 neofob left #gluster
17:01 phox yay now my crap is "back up"
17:05 khushildep_ joined #gluster
17:24 tryggvil joined #gluster
17:32 polfilm joined #gluster
17:36 vpshastry joined #gluster
17:36 lpabon joined #gluster
17:36 vpshastry left #gluster
17:36 lalatenduM joined #gluster
17:37 kkeithley @yum
17:37 glusterbot kkeithley: The official community glusterfs packages for RHEL (including CentOS, SL, etc), Fedora 17 and earlier, and Fedora 18 arm/armhfp are available at http://goo.gl/s077x. The official community glusterfs packages for Fedora 18 and later are in the Fedora yum updates repository.
17:38 kkeithley @forget yum
17:38 glusterbot kkeithley: The operation succeeded.
17:40 kkeithley @learn yum as The official community glusterfs packages for RHEL (including CentOS, SL, etc) are available at http://download.gluster.org/pub/gluster/glusterfs/. The official community glusterfs packages for Fedora 18 and later are in the Fedora yum updates (or updates-testing) repository.
17:40 glusterbot kkeithley: The operation succeeded.
17:41 phox yeah, boooo shortened URLs
17:42 kkeithley @yum
17:42 glusterbot kkeithley: The official community glusterfs packages for RHEL (including CentOS, SL, etc) are available at http://goo.gl/42wTd5 The official community glusterfs packages for Fedora 18 and later are in the Fedora yum updates (or updates-testing) repository.
17:43 JoeJulian glusterbot's going to shorten the urls in all factoids. Deal with it. ;)
17:43 semiosis afaict the shortened URLs are a hit with people using irc in a terminal
17:44 semiosis the tmux/irssi crowd maybe?
17:44 phox ctcp version me.
17:44 phox narf.
17:46 JoeJulian /ctcp #gluster smell progress
17:49 kkeithley I understand why people like short urls in, e.g. for twitter, and maybe people doing IRC on their smartphones. Beyond that they're not much better — in my book — than QR codes.
17:49 phox is that available as an iSmell?  I'm an Apple user (:
17:50 phox they're annoying, because if I'm not on IRC for example I can't go "oh yeah the URL had these strings in it so now I can kick Google and hope it knows what I'm talking about"
17:50 kkeithley Nothing like getting sent to a malware site or a pron site by a malicious QR code
17:50 phox ahahahaha
17:50 JoeJulian I like my Rick Roll qr code T-shirt.
17:52 phox nice
17:52 kkeithley those are funny like a little kid telling you the same knock-knock joke 500 times. ;-)
17:52 phox somewhere sell those, or custom?
17:52 JoeJulian custom
* phox likes his 'save-buffers-kill-emacs' bumper sticker
17:53 phox some CS students claiming to be Vim users pulled up next to me and claimed they got it, but they didn't =/
17:53 JoeJulian The problem with long urls in factoids is there are line length limits in IRC and I don't want factoids rolling over into multiple lines if I can avoid it.
17:54 phox limit's a fair bit higher than the factoid length, though
17:54 JoeJulian glusterbot already runs the thin line between accepted and spammy.
17:54 phox it's also annoying when e.g. it applies a regexp to something I said about 50 messages ago.
17:54 JoeJulian I've had to reword some factoids to stay within the line length.
17:54 phox that's crossing from spammy to spazzy.
17:54 kkeithley I'd like it better if the factoid was something like .... http://goo.gl/abcdef/ (download.gluster.org) ...
17:55 phox if it's not https perhaps drop the http
17:55 phox URI is cute and all but we can all handle going to a website without someone telling us the protocol
17:56 JoeJulian breaks some clients.
17:56 phox those clients should fix themselves :)
17:56 phox or C&P
17:56 JoeJulian But sure, if you do that in a factoid it won't get shortened.
17:56 phox it'll also save 7 characters
17:56 JoeJulian I'm too lazy for C&P.
17:56 phox I can open addresses e.g. download.gluster.org without C&P
17:57 FilipeCifali joined #gluster
17:58 JoeJulian Start up a vote somewhere and see what the consensus is. I'm not completely against changing it, just mostly.
17:58 semiosis something like this maybe?  ,,(ports)
17:58 glusterbot glusterd's management port is 24007/tcp and 24008/tcp if you use rdma. Bricks (glusterfsd) use 24009 & up for <3.4 and 49152 & up for 3.4. (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111 and 2049 since 3.4. See also: http://goo.gl/pq9Vru (gluster.org)
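[Editor's note: translated into iptables rules for a 3.4 server, the port list in that factoid would look roughly like the following. The brick-port range is an assumption — widen it to cover however many bricks the host runs, since each brick daemon takes one port from 49152 up.]

```shell
# glusterd management (24008 only matters if you use rdma)
iptables -A INPUT -p tcp --dport 24007:24008 -j ACCEPT
# brick daemons (glusterfsd): 49152 and up on 3.4, one port per brick
iptables -A INPUT -p tcp --dport 49152:49160 -j ACCEPT
# gluster's built-in NFS server plus NLM
iptables -A INPUT -p tcp --dport 38465:38468 -j ACCEPT
# rpcbind/portmap and NFS
iptables -A INPUT -p tcp --dport 111 -j ACCEPT
iptables -A INPUT -p udp --dport 111 -j ACCEPT
iptables -A INPUT -p tcp --dport 2049 -j ACCEPT
```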
17:58 phox does your client handle www.gluster.org or download.gluster.org ? :P
17:58 rwheeler joined #gluster
17:59 JoeJulian If you're asking me in order to support your argument, then you're in effect stating that it's all about me. If it's all about me, I keep the short urls.
18:00 FilipeCifali hello guys
18:01 kkeithley semiosis: yes, but if g'bot did it automatically instead of me having to do it when I teach it the factoid would be nice.
18:02 semiosis kkeithley: obviously you didnt click the link ;)
18:04 * kkeithley thinks if I got fired after being sent to a pron site from a company computer after you _assured_ me it was gluster.org, we'd be having a different conversation.
18:04 JoeJulian Hmm. FilipeCifali is right... it does appear to be all guys. We need more women in the industry.
18:06 JoeJulian johnmark, get on that.
18:06 FilipeCifali ROFL
18:06 FilipeCifali so, if I take 2 brick down (like if it happens a kernel panic) should my data be "hanging" to be accessed?
18:07 rotbeard joined #gluster
18:07 FilipeCifali 2 bricks from a 4 bricks volume
18:07 JoeJulian depends on a number of things. For the first 45 seconds, absolutely.
18:07 semiosis kkeithley: even if glusterbot automatically snarfed a domain or even page title and added it to the shortened url those could be spoofed
18:07 semiosis they can be spoofed even for longer urls
18:08 kkeithley we're hiring. Know any good female candidates?
18:08 JoeJulian FilipeCifali: After that initial 45 seconds, if they're each part of replica subvolumes, then everything should start working again.
18:08 semiosis kkeithley: dont click links in channels unless they're logged, so you could prove you didnt go to any NSFW sites knowingly :)
18:09 FilipeCifali hmm, is this in the docs somewhere so I can have more info about it before asking?
18:10 JoeJulian http://checkshorturl.com/expand.php
18:10 glusterbot Title: CheckShortURL - Unshorten tiny links from hundreds of URL shortening services (at checkshorturl.com)
18:12 JoeJulian http://longurl.org seems even more useful.
18:12 glusterbot Title: LongURL | The Universal Way to Expand Shortened URLs (at longurl.org)
18:12 davinder joined #gluster
18:13 JoeJulian Lol... http://www.shadyurl.com
18:13 glusterbot Title: ShadyURL - Don't just shorten your URL, make it suspicious and frightening. (at www.shadyurl.com)
18:13 JoeJulian download.gluster.org became http://5z8.info/openme.exe_z9o4ih_animated-gifs-of-train-accidents
18:13 glusterbot <http://goo.gl/nqqHsQ> (at 5z8.info)
18:14 FilipeCifali HAHA
18:14 bstromski joined #gluster
18:17 JoeJulian FilipeCifali: It's the ,,(ping-timeout)
18:17 glusterbot FilipeCifali: The reason for the long (42 second) ping-timeout is because re-establishing fd's and locks can be a very expensive operation. Allowing a longer time to reestablish connections is logical, unless you have servers that frequently die.
18:18 FilipeCifali Oh I got it, so if I have servers that don't die frequently I shouldn't worry, and can just do the normal peer detach, right?
18:19 JoeJulian If you shut down a server normally, the tcp connection will be closed properly and there will be no ping-timeout.
18:19 JoeJulian You won't even need to detach (unless you're removing that server from the trusted pool).
18:19 FilipeCifali oh even better, then I just need to fear kernel panics :)
18:19 JoeJulian kernel panics and newbs tripping over network cords.
18:20 FilipeCifali oh yeah, but that guy must cut like 4 racks to make this setup totally offline :D
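The 42-second ping-timeout glusterbot describes above is tunable per volume via the `network.ping-timeout` option; a minimal sketch, assuming a volume named `myvol` (the name is hypothetical):

```shell
# Lower the ping-timeout for volume "myvol"; the long default exists
# because re-establishing fds and locks after a drop is expensive.
gluster volume set myvol network.ping-timeout 10

# Confirm the reconfigured option in the volume's info output.
gluster volume info myvol
```

This is a trade-off: a shorter timeout fails over faster after a kernel panic, but makes brief network blips far more expensive, per the factoid above.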
18:25 FilipeCifali you guys put glusterd in runlevel or after local?
18:25 FilipeCifali should I have any need to put a script (in crontab) to verify this service running?
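One possible shape for the cron check FilipeCifali describes, as a sketch (the service name, interval, and init style are assumptions, not something the channel confirms):

```shell
# crontab entry: every 5 minutes, restart glusterd if it isn't running.
*/5 * * * * pidof glusterd >/dev/null || service glusterd start
```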
18:28 glusterbot New news from resolvedglusterbugs: [Bug 862082] build cleanup <http://goo.gl/pzQv9M>
18:30 badone joined #gluster
18:47 jbrooks joined #gluster
18:50 kPb_in_ joined #gluster
18:52 NeatBasis joined #gluster
18:54 B21956 joined #gluster
19:04 Technicool joined #gluster
19:05 MinhP left #gluster
19:08 Gilbs joined #gluster
19:14 Gilbs I'm adding a peer to a new gluster setup -- When I do a peer probe/status, I can see the peer joined using the FQDN.  When I do a peer status on the server I just joined it shows the other server via IP instead of the FQDN.  The same happens when I reverse the peer probe order.   Servers are in DNS, /etc/hosts file has the DNS entry as well.  Any ideas?      gluster 3.4.0, centos 6.4
19:17 jiqiren Gilbs: it shouldn't matter
19:18 jiqiren Gilbs: as long as those entries in /etc/hosts are really there
19:21 JoeJulian @hostnames
19:21 glusterbot JoeJulian: Hostnames can be used instead of IPs for server (peer) addresses. To update an existing peer's address from IP to hostname, just probe it by name from any other peer. When creating a new pool, probe all other servers by name from the first, then probe the first by name from just one of the others.
19:22 badone joined #gluster
19:26 Gilbs ah
19:29 Gilbs Thank worked, thank you.
19:30 Gilbs I owe JoeJulian close to 15 cases of beer at this point -- put this one on my tab as well.  :)
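The ,,(hostnames) procedure glusterbot quotes above, sketched as shell for a fresh two-node pool (the hostnames are hypothetical):

```shell
# From server1: probe the other peer by name when creating the pool.
gluster peer probe server2.example.com

# From server2: probe the first server back by name, so server1's
# address is stored as a hostname rather than the probing IP.
gluster peer probe server1.example.com

# Both sides should now list peers by hostname.
gluster peer status
```

For an existing pool where a peer shows up by IP (Gilbs's symptom), re-probing that peer by name from any other peer updates the stored address.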
19:41 phox so, my brain no longer works after everything that went TU this morning... can anyone comment on:  0-rpc-transport/rdma: rdma_cm event channel creation failed (No such file or directory)
19:41 phox I presume it tried to open something in /dev/ but nfi what
19:46 * phox pokes kkeithley :)
20:01 NeatBasis joined #gluster
20:15 Gilbs left #gluster
20:16 badone joined #gluster
20:26 jruggiero left #gluster
20:42 StarBeast joined #gluster
20:53 phox anyone?  bueller?
20:54 ndk joined #gluster
20:59 a2 your rdma drivers are probably not loaded
21:07 phox they are.
21:07 phox HCA is in connected mode and everything =/
21:07 phox the TCP link is over IPoIB, FWIW
21:07 phox that said, thanks for the input.  not really sure wth it could be =/
21:08 phox could still be the part of the stack that exposes whatever iface it's after
21:08 phox of course if it would tell me what device file it tried to open that would be cool :)
21:09 phox can I even strace the server process or is it freshly forked when it tries to bring the brick up? =/
21:11 semiosis phox: strace -f to follow children
21:12 phox rdma_ucm did it
21:12 phox nfi what it opened but I -think- there we go
21:13 phox also didn't check clearly if it was that or a dep of that
21:13 phox but hey
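semiosis's strace tip above, sketched as a concrete invocation (the PID lookup and log path are assumptions; on Linux the rdma_cm event channel is created by opening /dev/infiniband/rdma_cm, which the rdma_ucm module provides, consistent with what phox found):

```shell
# Attach to glusterd and follow any brick processes it forks (-f),
# logging open() calls so a missing device file becomes visible.
strace -f -e trace=open -o /tmp/glusterd.trace -p "$(pidof glusterd)" &

# Reproduce the failure (e.g. start the volume / retry the mount),
# then look for the open() that returned ENOENT:
grep ENOENT /tmp/glusterd.trace
```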
21:14 StarBeas_ joined #gluster
21:14 phox client seems to be not going anywhere trying to mount it =/
21:14 phox waiting for that to time out or something so I can check out logs.
21:15 phox yeah just sitting there not telling me anything =/
21:16 phox [rdma.c:1079:gf_rdma_cm_event_handler] 0-rdma-attempt-client-0: cma event RDMA_CM_EVENT_REJECTED, error 8
21:16 phox auth.allow not set so I -presume- it's defaulting to anyone
21:18 * jiqiren still uses IPoIB because it is stable. :P
21:18 phox it's also slow
21:19 phox this doesn't seem to be blowing up, it's just being weird about actually letting me connect =/
21:21 tryggvil joined #gluster
21:29 andreask joined #gluster
21:37 tryggvil joined #gluster
21:40 manik joined #gluster
21:49 jskinne__ joined #gluster
21:51 chirino joined #gluster
21:57 phox bbl
22:11 torrancew joined #gluster
22:47 B21956 joined #gluster
22:51 daMaestro joined #gluster
23:04 StarBeast joined #gluster
23:29 glusterbot New news from resolvedglusterbugs: [Bug 904005] tests: skip time consuming mock builds for code-only changes <http://goo.gl/h0kIz>
23:48 sticky_afk joined #gluster
23:48 stickyboy joined #gluster
