IRC log for #gluster, 2014-04-08


All times shown according to UTC.

Time Nick Message
00:30 yinyin joined #gluster
00:45 vpshastry joined #gluster
00:54 sputnik13 joined #gluster
00:57 Slash joined #gluster
01:12 jag3773 joined #gluster
01:12 vpshastry joined #gluster
01:22 jmarley joined #gluster
01:22 jmarley joined #gluster
01:31 Slash joined #gluster
01:45 haomaiwa_ joined #gluster
01:45 lyang0 joined #gluster
01:48 jag3773 joined #gluster
01:58 haomaiwa_ joined #gluster
02:04 raghug joined #gluster
02:06 haomai___ joined #gluster
02:06 pk joined #gluster
02:07 pk left #gluster
02:21 harish joined #gluster
02:55 rjoseph joined #gluster
02:59 pk joined #gluster
03:01 nightwalk joined #gluster
03:05 shubhendu joined #gluster
03:10 pk joined #gluster
03:17 spandit joined #gluster
03:22 lpabon joined #gluster
03:24 vpshastry joined #gluster
03:31 hagarth joined #gluster
03:45 raghu joined #gluster
03:50 rjoseph joined #gluster
03:57 itisravi joined #gluster
04:10 atinm joined #gluster
04:17 yinyin joined #gluster
04:18 shylesh joined #gluster
04:18 ppai joined #gluster
04:19 haomaiwang joined #gluster
04:19 hagarth joined #gluster
04:20 davinder joined #gluster
04:20 rjoseph joined #gluster
04:22 dusmant joined #gluster
04:25 deepakcs joined #gluster
04:26 ngoswami joined #gluster
04:28 ndarshan joined #gluster
04:29 rastar joined #gluster
04:33 vpshastry joined #gluster
04:50 nishanth joined #gluster
04:58 meghanam joined #gluster
04:58 meghanam_ joined #gluster
05:01 ravindran1 joined #gluster
05:01 benjamin_____ joined #gluster
05:03 bharata-rao joined #gluster
05:13 prasanth_ joined #gluster
05:20 ravindran1 left #gluster
05:21 chirino joined #gluster
05:23 aravindavk joined #gluster
05:28 sputnik13 joined #gluster
05:30 sahina joined #gluster
05:31 lalatenduM joined #gluster
05:35 nightwalk joined #gluster
05:36 cyber_si joined #gluster
05:46 eightyeight joined #gluster
05:51 nshaikh joined #gluster
06:04 kdhananjay joined #gluster
06:05 RameshN joined #gluster
06:06 Pavid7 joined #gluster
06:18 sputnik13 joined #gluster
06:18 davinder joined #gluster
06:21 ravindran1 joined #gluster
06:29 jtux joined #gluster
06:34 eightyeight joined #gluster
06:38 eightyeight joined #gluster
06:40 eightyeight joined #gluster
06:42 vimal joined #gluster
06:46 dusmant joined #gluster
06:47 Pavid7 joined #gluster
06:50 saurabh joined #gluster
06:52 kanagaraj joined #gluster
06:55 ctria joined #gluster
07:00 pk joined #gluster
07:04 eseyman joined #gluster
07:06 eightyeight joined #gluster
07:08 psharma joined #gluster
07:12 vpshastry1 joined #gluster
07:12 stickyboy Woke up this morning to find a RAID controller had derped and removed 12 drives from service.
07:13 stickyboy GlusterFS is in replicate configuration, and FUSE clients seem to be unaffected...
07:13 stickyboy (other than filling up a ton of log files)
07:13 stickyboy Clients are writing at 200MB/sec throughout the whole ordeal...
07:14 vpshastry2 joined #gluster
07:17 nightwalk joined #gluster
07:18 fsimonce joined #gluster
07:20 toad__ joined #gluster
07:21 keytab joined #gluster
07:31 kevein joined #gluster
07:31 dusmant joined #gluster
07:33 root__ joined #gluster
07:33 root__ who
07:36 root__ hello
07:36 glusterbot root__: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
07:38 root__ gluster on kvm
07:39 root__ \help
07:39 Durzo joined #gluster
07:41 vpshastry1 joined #gluster
08:06 andreask joined #gluster
08:09 X3NQ joined #gluster
08:23 Durzo joined #gluster
08:26 davinder joined #gluster
08:41 lalatenduM joined #gluster
08:41 harish joined #gluster
08:46 ceiphas joined #gluster
08:46 ceiphas hello
08:46 glusterbot ceiphas: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
08:47 gdubreui joined #gluster
08:48 jiku joined #gluster
08:48 ceiphas i have two xfs-formatted bricks in a volume that is mounted on the two hosts with the bricks and a client. i get I/O Errors on all but one of the bricks if i want to list the volumes content
08:53 saravanakumar joined #gluster
09:03 Durzo ceiphas, define 'list the volumes content' ?
09:03 ceiphas Durzo, ls -al /mount/volume/
09:03 Durzo how are the IO errors presenting themselves?
09:04 ceiphas wait, translating...
09:04 ceiphas ls: reading directory /mount/dbtemp/ : input/output error
09:05 Joe630 joined #gluster
09:06 Durzo from the client?
09:07 ceiphas and one of the brick hosts
09:08 ceiphas the other can list the contents
09:08 Durzo sorry, you have mounted from a client and you are getting ls IO errors on a brick? how does that work unless you are doing the ls from the brick ?
09:12 shubhendu joined #gluster
09:13 Durzo ceiphas, either way.. .input output errors usually indicate out of sync nodes
09:13 Durzo is your volume healthy? have you tried to run a heal [info|heal-failed] ?
09:13 ceiphas yes, shows nothing
09:14 Durzo and peer status?
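The checks being suggested here, roughly as they look on gluster 3.4 (VOLNAME stands for the volume name):
    gluster peer status                            # every peer should show "Peer in Cluster (Connected)"
    gluster volume status VOLNAME                  # bricks, NFS and self-heal daemons should show "Y" under Online
    gluster volume heal VOLNAME info               # files queued for self-heal
    gluster volume heal VOLNAME info heal-failed   # heals that failed
    gluster volume heal VOLNAME info split-brain   # entries in split-brain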
09:17 ceiphas i connected the three glusters and made a rsync to the volume, after that the fs was not accessible anymore
09:17 Durzo what do your brick logs say?
09:21 ceiphas sooo... i deleted the complete gluster configuration and recreated the volume again
09:21 Durzo ok
09:21 ceiphas i can read and write the volume from all three hosts
09:21 ceiphas let's see what happens if i start the rsync again
09:22 nightwalk joined #gluster
09:22 Durzo how exactly did you delete the gluster config? its tricky doing that, most of the time there is filesystem xattrs left in the brick paths
09:22 ceiphas i stopped glusterd, truncated /var/lib/glusterd and the brick
09:23 harish joined #gluster
09:23 Durzo did you remove the hidden gluster dir in the brick path ?
09:24 ceiphas my brick was in /bricks/dbtemp/brick and i deleted this dir and recreated it
09:24 Durzo ok
09:25 ceiphas /bricks/dbtemp is a mounted lv formatted with xfs
09:25 Durzo if you have issues from here on, take a look at http://joejulian.name/blog/glusterfs-path-or-a-prefix-of-it-is-already-part-of-a-volume/  and the "The Solution" commands to remove the filesystem attrs
09:25 glusterbot Title: GlusterFS: {path} or a prefix of it is already part of a volume (at joejulian.name)
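For reference, the "The Solution" section on that page boils down to clearing gluster's metadata from the brick directory before reusing it; a sketch using the brick path from this conversation:
    setfattr -x trusted.glusterfs.volume-id /bricks/dbtemp/brick
    setfattr -x trusted.gfid /bricks/dbtemp/brick
    rm -rf /bricks/dbtemp/brick/.glusterfs
    service glusterd restart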
09:26 ceiphas i just started rsync again, seems as if it works now...
09:27 ceiphas i'm not sure why the volume broke, but i don't know if i can use such a fragile system in a production environment
09:29 al joined #gluster
09:29 Durzo ceiphas, what version of gluster?
09:29 ceiphas 3.4.2
09:29 ninkotech_ joined #gluster
09:29 ninkotech__ joined #gluster
09:30 hagarth joined #gluster
09:30 Durzo ceiphas, ok.. not sure what happend to your original volume but many of us are running gluster in production serving millions of files without issues
09:31 ceiphas if i knew what i did wrong i would feel better
09:31 Durzo as would i, unfortunately you blew it away.. perhaps there is info in the brick logs
09:31 Durzo if you still have them
09:32 Durzo should be /var/log/glusterfs/
09:32 jmarley joined #gluster
09:32 ceiphas the brick logs are nearly empty
09:33 Durzo any other logs?
09:33 ceiphas but i get warnings here: tail -f /var/log/glusterfs/glustershd.log
09:33 ceiphas W [socket.c:514:__socket_rwv] 0-dbtemp-client-1: readv failed (No data available)
09:33 ceiphas or is this the rsync?
09:34 ceiphas i get this every 3 seconds
09:34 ctria joined #gluster
09:34 glusterbot New news from newglusterbugs: [Bug 1085259] Client crashes while doing xattr operations and other basic fop tests <https://bugzilla.redhat.com/show_bug.cgi?id=1085259>
09:35 al joined #gluster
09:35 kdhananjay joined #gluster
09:48 ilbot3 joined #gluster
09:48 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
09:48 yosafbridge joined #gluster
09:49 juhaj joined #gluster
09:49 shubhendu joined #gluster
09:49 tg2 joined #gluster
09:49 xavih joined #gluster
09:50 purpleidea joined #gluster
09:51 ceiphas gluster peer status shows Hostname: 10.24.1.21, but the ip could be resolved backwards
09:51 ceiphas if i probe an already probed host with another name could that create problems?
09:51 Peanut__ joined #gluster
09:51 Durzo most probably
09:52 ceiphas but why is there an ip and not the hostname?
09:52 jruggiero joined #gluster
09:52 m0zes joined #gluster
09:52 SteveCooling joined #gluster
09:52 Durzo my guess is that gluster couldnt do a rdns on it ?
09:53 ceiphas but i can
09:53 ceiphas using nss and direct dns works
09:53 Durzo your /etc/resolv.conf was correct at the time gluster daemons starting?
09:53 Durzo you are not relying on /etc/hosts ?
09:54 xymox joined #gluster
09:55 the-me joined #gluster
09:55 ceiphas Durzo, i work in an enterprise environment, i have three dns servers accessible, two of them even from wan
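A peer that shows up by IP instead of hostname can usually be renamed by probing it back by name from one of the other servers; a sketch, with a placeholder hostname:
    # run on a peer that currently lists 10.24.1.21
    gluster peer probe server2.example.com
    gluster peer status          # the entry should now display the hostname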
09:55 Durzo restart your glusterd ?
09:56 nishanth joined #gluster
09:56 hagarth joined #gluster
09:56 pk joined #gluster
09:56 ppai joined #gluster
09:56 vimal joined #gluster
09:56 aravindavk joined #gluster
09:56 sahina joined #gluster
09:56 fyxim_ joined #gluster
09:56 ceiphas Durzo, changes nothing
09:56 kkeithley joined #gluster
09:57 deepakcs joined #gluster
09:57 edward1 joined #gluster
09:57 ctria joined #gluster
09:57 radez_g0n3 joined #gluster
09:57 vpshastry1 joined #gluster
09:57 Durzo did your init script actually stop the glusterd's ? the redhat / ubuntu one does not.. issuing a gluster stop leaves processes lying around you have to kill manually :/
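What "kill manually" amounts to in practice is sketched below; service names differ between distributions, and killing glusterfsd takes the bricks offline, so only do that deliberately:
    service glusterfs-server stop    # Ubuntu/Debian packaging; "service glusterd stop" on RHEL/CentOS
    pgrep -fl gluster                # anything left? glusterd, glusterfsd (bricks), glusterfs (nfs/shd)
    pkill glusterd
    pkill glusterfsd                 # bricks go offline
    service glusterfs-server start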
09:57 meghanam_ joined #gluster
09:57 avati joined #gluster
09:58 ceiphas Durzo, seems like it is the same here
09:58 ceiphas now i get io errors again
09:58 shubhendu joined #gluster
09:59 nishanth joined #gluster
09:59 vpshastry1 joined #gluster
10:00 ceiphas it is all borked up again
10:00 ceiphas sorry but i cannot use a "failover" system that breaks when one of two hosts goes down
10:00 Durzo no doubt, but it shouldnt be doing that
10:01 Durzo what do your logs show now ?
10:01 ceiphas on the erroneous client (also a brick host) or the working brick host (also working client)?
10:02 Durzo the gluster peers
10:02 ceiphas all three?
10:02 Durzo the client is not a peer
10:02 Durzo just the servers
10:02 ceiphas i have three peers, two storing bricks and mounting the volume, one only mounting
10:02 Durzo i mean a peer as per 'gluster peer status'
10:03 ceiphas ahhh then ill detach the client
10:03 Alex joined #gluster
10:03 VeggieMeat joined #gluster
10:03 marcoceppi joined #gluster
10:03 sulky joined #gluster
10:03 delhage joined #gluster
10:03 tjikkun joined #gluster
10:03 DV joined #gluster
10:03 NuxRo joined #gluster
10:03 marcoceppi joined #gluster
10:03 delhage joined #gluster
10:03 tjikkun joined #gluster
10:03 NuxRo joined #gluster
10:03 bazzles joined #gluster
10:04 hchiramm_ joined #gluster
10:04 verdurin joined #gluster
10:04 asku joined #gluster
10:04 ceiphas Durzo, [server-rpc-fops.c:1572:server_open_cbk] 0-dbtemp-server: 1034673: OPEN (null) (c653c818-dfed-47a0-8a59-a7aad31c545f) ==> (No such file or directory)
10:04 ceiphas [server-rpc-fops.c:243:server_inodelk_cbk] 0-dbtemp-server: 1034672: INODELK (null) (c653c818-dfed-47a0-8a59-a7aad31c545f) ==> (No such file or directory)
10:05 ceiphas lots of these
10:05 Durzo can you run 'gluster volume heal <VOLNAME> split-brain' ?
10:06 _NiC joined #gluster
10:06 Durzo sorry that should be: info split-brain
10:06 ceiphas on both?
10:06 iamben_tw joined #gluster
10:06 Durzo just one
10:06 ceiphas number of entries 0
10:07 ceiphas for both bricks
10:07 glusterbot joined #gluster
10:07 Durzo gluster volume status <VOLNAME> shows Y accross the board?
10:08 ceiphas yes
10:08 ceiphas is it problematic that the host already has an nfs server running?
10:09 ceiphas other way round i don't need gluster-nfs-server
10:10 Durzo i have never run them on the same server but it doesnt sound like something that should be together...
10:10 primusinterpares joined #gluster
10:10 Durzo time for me to leave work... sorry i couldnt help anymore. one of the devs should be online in a few hours.. try them later
10:13 nightwalk joined #gluster
10:13 ceiphas k
10:14 marcoceppi joined #gluster
10:14 marcoceppi joined #gluster
10:14 VeggieMeat joined #gluster
10:16 ultrabizweb joined #gluster
10:16 delhage joined #gluster
10:24 ceiphas [2014-04-08 10:17:54.076346] I [server-rpc-fops.c:1572:server_open_cbk] 0-dbtemp-server: 1780219: OPEN (null) (350b1d74-e83a-4588-9f2e-e4a91eb35193) ==> (No such file or directory)
10:24 ceiphas [server-rpc-fops.c:1572:server_open_cbk] 0-dbtemp-server: 1780219: OPEN (null) (350b1d74-e83a-4588-9f2e-e4a91eb35193) ==> (No such file or directory)
10:25 ceiphas i get billions of these
10:26 larsks joined #gluster
10:26 hflai joined #gluster
10:27 ninkotech__ joined #gluster
10:27 mtanner joined #gluster
10:27 fsimonce joined #gluster
10:27 mwoodson joined #gluster
10:27 Kins joined #gluster
10:27 eseyman joined #gluster
10:27 ron-slc joined #gluster
10:27 nikk joined #gluster
10:27 avati joined #gluster
10:27 bfoster joined #gluster
10:27 sac_ joined #gluster
10:27 askb joined #gluster
10:27 Psi-Jack joined #gluster
10:27 madphoenix joined #gluster
10:27 ThatGraemeGuy joined #gluster
10:27 pjschmitt joined #gluster
10:27 monotek joined #gluster
10:27 rshade98 joined #gluster
10:27 foster joined #gluster
10:27 abyss_ joined #gluster
10:27 jurrien joined #gluster
10:27 stigchristian joined #gluster
10:27 lanning joined #gluster
10:27 Guest95587 joined #gluster
10:27 joshin joined #gluster
10:27 andrewklau joined #gluster
10:27 mwoodson joined #gluster
10:27 atinm joined #gluster
10:27 Norky joined #gluster
10:27 joshin joined #gluster
10:27 benjamin_____ joined #gluster
10:27 sahina joined #gluster
10:27 gdubreui joined #gluster
10:27 avati joined #gluster
10:27 sac_ joined #gluster
10:27 lyang0 joined #gluster
10:27 askb joined #gluster
10:27 jiku joined #gluster
10:27 atinm joined #gluster
10:27 sahina joined #gluster
10:28 RameshN joined #gluster
10:28 RameshN joined #gluster
10:28 qdk joined #gluster
10:28 kanagaraj joined #gluster
10:29 lalatenduM joined #gluster
10:29 shylesh joined #gluster
10:29 glusterbot` joined #gluster
10:29 prasanth_ joined #gluster
10:29 ctria joined #gluster
10:29 efries joined #gluster
10:30 ceiphas my brick log is exploding... more than 200MB in the last minutes
10:30 GabrieleV joined #gluster
10:30 DV joined #gluster
10:30 hagarth joined #gluster
10:30 ikk joined #gluster
10:30 kdhananjay joined #gluster
10:30 siel joined #gluster
10:30 siel joined #gluster
10:31 uebera|| joined #gluster
10:31 uebera|| joined #gluster
10:32 eshy joined #gluster
10:32 systemonkey joined #gluster
10:32 misuzu joined #gluster
10:39 Slasheri joined #gluster
10:39 Slasheri joined #gluster
10:39 vincent_vdk joined #gluster
10:39 partner joined #gluster
10:39 SteveCooling joined #gluster
10:39 avati joined #gluster
10:39 mtanner joined #gluster
10:40 Psi-Jack joined #gluster
10:40 mwoodson joined #gluster
10:40 rastar joined #gluster
10:40 harish joined #gluster
10:40 Peanut__ joined #gluster
10:40 Kins joined #gluster
10:40 brosner joined #gluster
10:40 asku joined #gluster
10:40 ppai joined #gluster
10:40 saurabh joined #gluster
10:40 psharma joined #gluster
10:40 sac_ joined #gluster
10:40 vimal joined #gluster
10:40 Durzo joined #gluster
10:40 hflai joined #gluster
10:40 kkeithley joined #gluster
10:40 mwoodson joined #gluster
10:40 avati joined #gluster
10:40 johnmark joined #gluster
10:40 xymox joined #gluster
10:40 kkeithley1 joined #gluster
10:40 ninkotech__ joined #gluster
10:40 vpshastry1 joined #gluster
10:40 samppah joined #gluster
10:40 cyberbootje joined #gluster
10:40 edward1 joined #gluster
10:40 rastar joined #gluster
10:40 vimal joined #gluster
10:40 sac_ joined #gluster
10:40 psharma joined #gluster
10:40 saurabh joined #gluster
10:40 T0aD joined #gluster
10:40 kkeithley1 joined #gluster
10:40 elico joined #gluster
10:40 vpshastry1 joined #gluster
10:41 edward1 joined #gluster
10:41 aurigus joined #gluster
10:41 aurigus joined #gluster
10:41 ccha are tehre alot files in .glusterfs/indices/xattrop/ ?
10:45 ceiphas inside the brick? no i dont have this directory
10:48 ira_ joined #gluster
10:52 ceiphas i have this error: https://bugzilla.redhat.com/show_bug.cgi?id=761770
10:52 glusterbot Bug 761770: medium, low, ---, aavati, CLOSED CURRENTRELEASE, ls: Input/Output Error on client in mounted volume 64bit kernel/ 32bit userland
10:52 ceiphas but this error is atill in 3.4.2
10:52 ceiphas still
10:52 kshlm joined #gluster
10:53 Calum joined #gluster
10:54 hagarth joined #gluster
10:57 wgao joined #gluster
11:00 cyber_si you have 32bit client and 64bit server?
11:01 NuxRo joined #gluster
11:09 ceiphas cyber_si, yes
11:11 cyber_si already tried ino32 volume option?
11:12 ceiphas ?
11:12 ceiphas i think not
11:12 cyber_si nfs.enable-ino32
11:17 ceiphas i have nfs disabled as i have to use the kNFS
11:18 ceiphas changes nothing
11:24 joshuay04 joined #gluster
11:25 cyber_si what about the fuse client mount, how do the clients mount the volumes? Or do the kNFS servers export raw bricks from the gluster peers?
11:28 joshuay04 I am currently using Ubuntu KVM and Ceph in production. All my users are claiming our servers are too slow on read. I saw gluster has faster read speeds, so I am looking at gluster and ovirt. I have a few questions, can gluster self heal?
11:33 dusmant joined #gluster
11:34 diegows joined #gluster
11:34 kanagaraj joined #gluster
11:34 ceiphas cyber_si, i use fuse to mount the fs NFS has nothing to do with gluster atm on my machines
11:36 cyber_si fuse has an enable-ino32 option too
11:37 ceiphas client has it enabled in fstab
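For the record, the two places the 32-bit inode option can be set, the NFS volume option cyber_si mentioned and the FUSE mount option, look roughly like this (the volume name is from this conversation, the server name is a placeholder):
    gluster volume set dbtemp nfs.enable-ino32 on
    # fstab entry for a FUSE client:
    server1:/dbtemp  /mount/dbtemp  glusterfs  defaults,_netdev,enable-ino32  0 0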
11:37 glusterbot New news from newglusterbugs: [Bug 1031973] mount.glusterfs exits with code 0 even after failure. <https://bugzilla.redhat.com/show_bug.cgi?id=1031973>
11:53 keytab joined #gluster
11:55 gdubreui joined #gluster
12:02 ceiphas cyber_si, any other hint
12:03 glusterbot New news from resolvedglusterbugs: [Bug 761770] ls: Input/Output Error on client in mounted volume 64bit kernel/ 32bit userland <https://bugzilla.redhat.com/show_bug.cgi?id=761770>
12:06 ceiphas hey that was me
12:06 benjamin_____ joined #gluster
12:09 Durzo joined #gluster
12:10 tjikkun joined #gluster
12:13 B21956 joined #gluster
12:17 itisravi joined #gluster
12:17 marcoceppi joined #gluster
12:18 VeggieMeat joined #gluster
12:18 marcoceppi joined #gluster
12:21 DV joined #gluster
12:21 ultrabizweb joined #gluster
12:23 RameshN joined #gluster
12:23 dusmant joined #gluster
12:28 Ark joined #gluster
12:32 tomased joined #gluster
12:32 kshlm joined #gluster
12:40 bala1 joined #gluster
12:42 ceiphas any hint how to resolve this bug? https://bugzilla.redhat.com/show_bug.cgi?id=761770
12:42 glusterbot Bug 761770: medium, low, ---, aavati, CLOSED CURRENTRELEASE, ls: Input/Output Error on client in mounted volume 64bit kernel/ 32bit userland
12:42 Philambdo joined #gluster
12:45 shyam joined #gluster
12:46 ndevos ceiphas: how about bug 924087?
12:46 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=924087 is not accessible.
12:46 ndevos oh, maybe thats the wrong one
12:47 jag3773 joined #gluster
12:48 ndevos ceiphas: I know of a bug with 32-bit server, 64-bit client, thats not resolved yet
12:48 * ndevos just needs to find it
12:49 ndevos bug 1074023
12:49 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1074023 high, unspecified, ---, ndevos, POST , list dir with more than N files results in Input/output error
12:54 bennyturns joined #gluster
12:54 benjamin_____ joined #gluster
12:56 ThatGraemeGuy joined #gluster
12:59 ProT-0-TypE joined #gluster
13:00 tdasilva joined #gluster
13:03 glusterbot New news from resolvedglusterbugs: [Bug 1081605] glusterd should not create zombies if xfsprogs or e2fsprogs are missing <https://bugzilla.redhat.com/show_bug.cgi?id=1081605>
13:04 Isaacabo joined #gluster
13:05 Isaacabo Morning
13:06 sroy joined #gluster
13:07 the-me joined #gluster
13:08 Isaacabo Guys, quick question, yesterday i got some help from Andyy2, regarding a move-brick operation. it went wrong and now i need to move manually the files
13:10 nshaikh joined #gluster
13:11 Isaacabo If the file is not ont the brick will show the file with a T
13:11 Isaacabo Ex: ---------T. 2 root root      0 Mar 29 18:36 10.jpg
13:16 ceiphas ndevos, is there any workaround for this bug?
13:17 ndevos ceiphas: you can use nfs for mounting
13:17 ndevos ceiphas: for the 32-bit server, 64-bit client issue
13:17 ceiphas hmm lemme try
13:18 ceiphas same with 64bit server and 32bit client?
13:18 Ark joined #gluster
13:18 ceiphas thats what i have
13:18 ndevos I do not know what the issue with that is
13:19 ndevos but, I guess that using nfs works too - at least for any 32/64-bit incompatibility issues in the protocol
13:19 ceiphas is your bug only for 32bit server 64bit client or also other way around?
13:19 ceiphas is it possible to only enable nfs sharing for one brick?
13:20 ceiphas one of the bricks needs to keep its kNFS
13:21 ndevos ai, no, you can not enable nfs partially :-/
13:22 ndevos you can enable/disable per volume, but that does not really help you here....
13:22 ceiphas nope
13:22 japuzzo joined #gluster
13:22 Philambdo joined #gluster
13:22 ndevos and the bug I have is really only 32-bit server, 64-bit client, but there may be similar issues the other way around
13:25 ndevos ceiphas: what you could do, is create a nfs.vol file and start the glusterfs process manually, that way you can control on which storage server it is running
13:25 ndevos ceiphas: or, if you are lucky, you may have a /var/lib/glusterd/nfs/nfs.vol file already
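Starting the gluster NFS process by hand from that volfile would look something like the following; this mirrors the way glusterd launches it, but exact paths and extra options may differ:
    /usr/sbin/glusterfs -f /var/lib/glusterd/nfs/nfs.vol \
        -p /var/lib/glusterd/nfs/run/nfs.pid \
        -l /var/log/glusterfs/nfs.log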
13:25 Isaacabo answered my question, sorry for the troubles.
13:27 jruggier1 joined #gluster
13:28 rwheeler joined #gluster
13:34 nightwalk joined #gluster
13:38 [o__o] joined #gluster
13:46 kanagaraj joined #gluster
13:47 cfeller joined #gluster
13:48 dbruhn joined #gluster
13:49 dbruhn Morning all
13:51 Pavid7 joined #gluster
13:52 nightwalk joined #gluster
13:53 rpowell joined #gluster
13:56 nightwalk joined #gluster
13:58 shyam joined #gluster
14:02 talsne joined #gluster
14:04 talsne what would happen in the hypothetical (cough) situation where a partition was accidentally mounted as two different bricks? I was playing around with a test system and ended up with /dev/sdb1 mounted as /mnt/bricks/brick1 *and* /mnt/bricks/brick2.
14:12 kaptk2 joined #gluster
14:16 nishanth joined #gluster
14:18 jobewan joined #gluster
14:34 rsouthard joined #gluster
14:35 nishanth joined #gluster
14:36 rsouthard hey guys. trying to delete a volume, and it is failing. [cli-rpc-ops.c:692:gf_cli3_1_delete_volume_cbk] 0-: Returning with 16. Any ideas?
14:36 rsouthard this is gluster 3.2
14:45 eightyeight joined #gluster
14:45 ryand joined #gluster
14:45 JonnyNomad joined #gluster
14:45 ccha joined #gluster
14:45 overclk joined #gluster
14:45 cyber_si joined #gluster
14:45 divbell joined #gluster
14:45 neoice joined #gluster
14:45 Andyy2 joined #gluster
14:45 haomaiwang joined #gluster
14:45 atrius joined #gluster
14:45 JoeJulian joined #gluster
14:46 sticky_afk joined #gluster
14:46 saravanakumar joined #gluster
14:46 stickyboy joined #gluster
14:46 Abrecus joined #gluster
14:46 VerboEse joined #gluster
14:46 eryc_ joined #gluster
14:47 Jakey joined #gluster
14:47 Dasberger joined #gluster
14:49 XpineX joined #gluster
14:49 kmai007 joined #gluster
14:49 Oneiroi joined #gluster
14:49 kmai007 @ports
14:49 glusterbot kmai007: glusterd's management port is 24007/tcp and 24008/tcp if you use rdma. Bricks (glusterfsd) use 24009 & up for <3.4 and 49152 & up for 3.4. (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111 and 2049 since 3.4.
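On an iptables firewall, opening those ports for a 3.4 cluster comes out roughly as below; how far to extend the brick range depends on how many bricks the servers will ever carry:
    iptables -A INPUT -p tcp -m multiport --dports 24007,24008 -j ACCEPT            # glusterd management
    iptables -A INPUT -p tcp --dport 49152:49251 -j ACCEPT                          # brick ports, 3.4+
    iptables -A INPUT -p tcp -m multiport --dports 111,2049,38465:38468 -j ACCEPT   # gluster NFS + portmap
    iptables -A INPUT -p udp --dport 111 -j ACCEPT                                  # portmap over UDP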
14:49 atrius` joined #gluster
14:50 JonathanD joined #gluster
14:51 edong23 joined #gluster
14:55 badone joined #gluster
14:59 ceiphas ndevos: https://bugzilla.redhat.com/show_bug.cgi?id=1085425
14:59 glusterbot Bug 1085425: high, unspecified, ---, csaba, NEW , Input/Output Errors with 64bit Server and 32bit client
15:01 ndevos ceiphas: thanks, can you confirm that you do not have a problem when the directory only contains few files?
15:01 rwheeler joined #gluster
15:01 ndevos well, I think you did: When i mount it on a 32bit host everything works fine until i add a lot of files.
15:01 * kkeithley_ was just going to say that that sounds like a dupe of 1074023
15:02 ndevos kkeithley1: yeah, but the reverse :)
15:02 plarsen joined #gluster
15:02 tdasilva joined #gluster
15:03 zaitcev joined #gluster
15:03 chirino joined #gluster
15:06 ceiphas if i remove the files?
15:07 ceiphas i added 1074023 as a similar bug
15:07 kmai007 when you mount glusterNFS, how does it choose to UDP port 633 on the client?
15:08 glusterbot New news from newglusterbugs: [Bug 1085425] Input/Output Errors with 64bit Server and 32bit client <https://bugzilla.redhat.com/show_bug.cgi?id=1085425>
15:08 jmarley joined #gluster
15:08 jmarley joined #gluster
15:08 kkeithley_ gluster NFS doesn't use UDP
15:08 ceiphas ndevos, the strange thing is, i can still touch (create) files even in this state
15:08 ndevos kmai007: I think that would be the rpc.statd service? you can check that with the rpcinfo command
15:08 kmai007 so how would you interpret this output
15:09 kmai007 http://fpaste.org/92568/69697761/
15:09 glusterbot Title: #92568 Fedora Project Pastebin (at fpaste.org)
15:09 ndevos ceiphas: then it sounds as if the problem is with the READDIR(P) procedure, similar to bug 1074023
15:09 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1074023 high, unspecified, ---, ndevos, POST , list dir with more than N files results in Input/output error
15:11 ndevos kmai007: that port is likely the udp port where MOUNTD is listening on, rpcinfo will show you that
15:12 ndevos kmai007: if you are running rpc.mountd on the storage server, it will register at the portmapper (rpcbind) and will conflict with the gluster/nfs server
15:13 ndevos kmai007: you can only run one nfs-server on the storage servers, for exporting a gluster volume, you really want to use the gluster/nfs server
15:13 ceiphas ndevos, if i delete the files the fs gets accessible again
15:14 kmai007 ndevos: thanks, i'll check the storage servers
15:14 rsouthard hey guys. trying to delete a volume, and it is failing. [cli-rpc-ops.c:692:gf_cli3_1_delete_volume_cbk] 0-: Returning with 16. Any ideas?
15:14 ndevos ceiphas: interesting, that definitely sounds like a READDIR(P) problem - maybe the fix for 1074023 works in your case too
15:15 kmai007 ndevos: the client will mount the gluster volume as NFS. i'm just trying to understand how it picks that UDP port if it doesn't use UDP, i checked rpcinfo and its not the mountd port either
15:16 kmai007 i think i'm chasing a red herring
15:17 kmai007 on the storage server i see that its connected via TCP on :2049
15:18 ndevos kmai007: you have to understand how mounting an NFS export works... the nfs-client will contact the portmapper on the nfs-server, it asks for the port where mountd is listening on, then contacts mountd and asks for the initial file-handle of the nfs-export, after that, the nfs-client asks the portmapper what port is used for nfs, and the nfs-client contacts that port and passes the filehandle for an initial lookup (like a verification)
15:19 ndevos kmai007: mountd can use tcp or udp, the gluster/nfs server supports both for the MOUNT protocol
15:19 ndevos kmai007: port 2049 is the default NFS port, and gluster/nfs only supports tcp for NFS
15:20 ndevos kmai007: both of these ports are registered in the portmapper, 'rpcinfo -p $SERVER' should show the ports and programs that are available
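That handshake can be observed from the client with standard tools; the mount options below match what gluster/nfs supports (NFSv3 over TCP), and storage1/VOLNAME are placeholders:
    rpcinfo -p storage1        # portmapper registrations: portmapper, mountd, nfs, nlockmgr, ...
    showmount -e storage1      # asks mountd for the export list
    mount -t nfs -o vers=3,proto=tcp storage1:/VOLNAME /mnt/volname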
15:23 ndevos kmai007: does it make (more) sense now?
15:24 cyber_si http://fpaste.org/92571/
15:24 glusterbot Title: #92571 Fedora Project Pastebin (at fpaste.org)
15:24 cyber_si with this config you can run both nfs-server and glusternfs
15:26 ndevos cyber_si: yes, that should work - I didnt know there is a nfs.register-with-portmap option :)
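The paste above has since expired, but a setup that leaves the kernel NFS server in charge of the portmapper while gluster serves the volume over NFS on a different port would look roughly like this; the option names come from the gluster NFS translator, the port number is an arbitrary example:
    gluster volume set dbtemp nfs.register-with-portmap off
    gluster volume set dbtemp nfs.port 2050
    # clients then name the ports explicitly, since nothing is registered for them:
    mount -t nfs -o vers=3,proto=tcp,port=2050,mountport=38465 server1:/dbtemp /mnt/dbtemp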
15:29 daMaestro joined #gluster
15:29 rpowell joined #gluster
15:31 rwheeler joined #gluster
15:46 user_42 joined #gluster
15:58 ira joined #gluster
15:59 plarsen joined #gluster
15:59 vpshastry joined #gluster
16:10 John___ joined #gluster
16:11 John___ Greetings.
16:12 John___ I'm having some issues updating gluster from the yum repo. I keep getting "error: glusterfs-libs-3.4.3-2.el5: Header V4 RSA/SHA1 signature: BAD, key ID 4ab22bb3"
16:12 John___ I am running CentOS 5.10
16:17 Darnoth joined #gluster
16:18 Ark joined #gluster
16:21 vpshastry joined #gluster
16:22 kkeithley_ John___: I'm looking
16:27 John___ Thanks kkeithley
16:27 Mo__ joined #gluster
16:28 user_42 joined #gluster
16:30 hagarth joined #gluster
16:31 gmcwhistler joined #gluster
16:31 kkeithley_ John___: I resigned the rpms, the epel-5 part of the yum repo has the resigned rpms. Please retry. You might need to `yum clean all` first.
16:34 John___ kkeithley, no go. "error: glusterfs-libs-3.4.3-2.el5: Header V4 RSA/SHA1 signature: BAD, key ID 4ab22bb3"
16:34 John___ I did a clean all first
16:35 kkeithley_ hmmm
16:38 kmai007 ndevos: thank you very much....
16:38 rpowell joined #gluster
16:38 kmai007 I finally was able to get the glusterNFS volume mounted outside my DMZ, with opening port 2049 in addition to 3 TCP ports
16:39 rpowell left #gluster
16:40 kkeithley_ John___: that's what I get for believing someone else. I've deleted the sigs (for now) from the epel-5 rpms until I can figure how to get them signed properly. another yum clean all may be necessary
16:40 ndevos John___: you can check if you have the pubkey for the packages with: rpm -qa 'gpg-pubkey' | grep 4ab22bb3
16:41 ndevos John___: 'rpm -qi gpg-pubkey-4ab22bb3-52c71f1e' should also show some details of the key itself
16:42 ndevos and the key located on http://download.gluster.org/pub/gluster/glusterfs/3.4/LATEST/EPEL.repo/pub.key , use 'rpm --import pub.key' to import it in the rpm-database
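Collected in one place, the check-and-import sequence ndevos is walking through (key URL from the line above):
    rpm -qa 'gpg-pubkey' | grep 4ab22bb3     # is the key already in the rpm database?
    rpm -qi gpg-pubkey-4ab22bb3-52c71f1e     # details of the imported key
    curl -O http://download.gluster.org/pub/gluster/glusterfs/3.4/LATEST/EPEL.repo/pub.key
    rpm --import pub.key
    yum clean all && yum update 'glusterfs*'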
16:42 kkeithley_ I think it's the Header V4 that it's complaining about. el-5 supposedly only supports V3.
16:42 ndevos oh, is it?
16:43 * ndevos can imagine that it could be, its been a while he's been signing rpms
16:43 John___ [root@gluster01 ~]# rpm -qa 'gpg-pubkey' | grep 4ab22bb3 [root@gluster01 ~]# rpm -qi gpg-pubkey-4ab22bb3-52c71f1e package gpg-pubkey-4ab22bb3-52c71f1e is not installed
16:44 ndevos John___: so, it seems that you do not have the public key imported
16:44 kkeithley_ I think. I stopped signing the el-5 rpms, then someone told me about adding this to the .rpmmacros file: %__gpg_sign_cmd %{__gpg} --force-v3-sigs --batch --no-verbose --no-armor --passphrase-fd 3 --no-secmem-warning -u "%{_gpg_name}" -sbo %{__signature_filename} %{__plaintext_filename}
16:44 kkeithley_ which seemed to work for some people
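A sketch of how that is typically wired up on the signing host; the key name below is hypothetical, and whether --force-v3-sigs actually satisfies rpm on EL5 is exactly what is being debugged in this conversation:
    # ~/.rpmmacros
    %_signature gpg
    %_gpg_name Gluster Packager
    %__gpg_sign_cmd %{__gpg} --force-v3-sigs --batch --no-verbose --no-armor --passphrase-fd 3 --no-secmem-warning -u "%{_gpg_name}" -sbo %{__signature_filename} %{__plaintext_filename}

    rpm --resign glusterfs-*3.4.3-2.el5*.rpm               # re-sign the packages
    rpm --checksig glusterfs-libs-3.4.3-2.el5.x86_64.rpm   # verify the signature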
16:45 jruggiero left #gluster
16:45 ndevos aah, that reminds me! yes, something like that might be needed, but it should work for everyone :)
16:45 kkeithley_ so I'll put the signed rpms back
16:45 ndevos btw, there is no pubkey listed in http://download.gluster.org/pub/gluster/glusterfs/3.4/LATEST/EPEL.repo/glusterfs-epel.repo, maybe you can add that?
16:46 John___ ok. I installed the pub.key, but still got the error
16:46 ndevos :-/
16:46 John___ [root@gluster01 ~]# rpm -qa 'gpg-pubkey' | grep 4ab22bb3 gpg-pubkey-4ab22bb3-52c71f1e
16:47 John___ gpg-pubkey-4ab22bb3-52c71f1e
16:47 ndevos John___: do you get the same error, or something else?
16:47 John___ Same error; "error: glusterfs-libs-3.4.3-2.el5: Header V4 RSA/SHA1 signature: BAD, key ID 4ab22bb3"
16:48 ndevos John___: maybe use --nogpgcheck after you verified the source of the packages?
16:49 kkeithley_ the rpms that are there now aren't signed, so I don't know why you'd get bad signature
16:50 John___ neither am I. Everything worked fine last update.
16:54 vpshastry joined #gluster
16:58 shyam joined #gluster
17:00 plarsen joined #gluster
17:02 sputnik13 joined #gluster
17:03 Ark joined #gluster
17:03 pyros joined #gluster
17:03 pyros hello
17:03 glusterbot pyros: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
17:08 pyros Help please. just rebooted and i can't start the gluster daemon. Output: Connection failed. Check if daemon is operational
17:08 glusterbot New news from newglusterbugs: [Bug 1058300] VMs do not resume after paused state and storage connection to a gluster domain (they will also fail to be manually resumed) <https://bugzilla.redhat.com/show_bug.cgi?id=1058300>
17:08 pyros greetings and thanks
17:08 user_42 joined #gluster
17:09 tomased joined #gluster
17:09 user_42 Hi All! I have problems connecting a client (centos 6.5) to master (centos 6.5). gluster version on both sides is 3.4.2. With another master (same os and versions) it is working. Error is: No route to host. daemon is listening on 24007. I see one request coming (tcpdump on 24007) but thats it. Any ideas?
17:09 pyros ports 1000 10001 1000n
17:10 pyros you need open them and forward to the node
17:10 pyros I have opened in gluster 3.4 from 1000 to 1050
17:10 ndevos ~ports | user_42
17:10 glusterbot user_42: glusterd's management port is 24007/tcp and 24008/tcp if you use rdma. Bricks (glusterfsd) use 24009 & up for <3.4 and 49152 & up for 3.4. (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111 and 2049 since 3.4.
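"No route to host" against a machine that is otherwise reachable usually means a firewall REJECT rather than a routing problem; a quick check from the client side, with placeholder hostnames:
    nc -zv server1 24007                     # glusterd management port
    nc -zv server1 49152                     # first brick port on 3.4
    # and on the server itself:
    iptables -L INPUT -n --line-numbers      # look for REJECT rules sitting ahead of any gluster ACCEPTs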
17:11 * ndevos is out for the day, cya!
17:12 pyros please help
17:12 pyros just reboot
17:13 pyros daemon is not operational
17:13 pyros E [glusterd-store.c:2487:glusterd_resolve_all_bricks] 0-glusterd: resolve brick failed in restore
17:13 pyros i have two nodes
17:14 pyros they are function since 2 month correctly
17:14 primechuck joined #gluster
17:14 pyros etc-glusterfs-glusterd.vol.log
17:14 pyros resolve brick failed in restore
17:16 user_42 @glusterbot: thanks! they are both listening on the same ports except the working server is listening on 49152, 49153, 49155 and *:ipcserver while the non-working one is not listening on these but additionally on *:nas.  installation was the same on both servers!?
17:17 John___ kkeithley_, just a quick update. I've upgrade the front end node that mounts my glusterfs. That updated just fine.
17:17 John___ That is CentOS 6.5
17:20 pyros i have ubuntu 12.04.3 i cant start glusterfs daemon
17:20 pyros i just reboot system
17:27 pyros my log when start gluster-server
17:27 pyros I [glusterfsd.c:1910:main] 0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version 3.4.2 (/usr/sbin/glusterd -p /var/run/glusterd.pid)
17:27 pyros [2014-04-08 17:24:38.656611] I [glusterd.c:961:init] 0-management: Using /var/lib/glusterd as working directory
17:27 pyros [2014-04-08 17:24:38.657560] I [socket.c:3480:socket_init] 0-socket.management: SSL support is NOT enabled
17:27 pyros [2014-04-08 17:24:38.657576] I [socket.c:3495:socket_init] 0-socket.management: using system polling thread
17:27 pyros [2014-04-08 17:24:38.658065] W [rdma.c:4197:__gf_rdma_ctx_create] 0-rpc-transport/rdma: rdma_cm event channel creation failed (No such device)
17:27 pyros [2014-04-08 17:24:38.658079] E [rdma.c:4485:init] 0-rdma.management: Failed to initialize IB Device
17:27 pyros [2014-04-08 17:24:38.658087] E [rpc-transport.c:320:rpc_transport_load] 0-rpc-transport: 'rdma' initialization failed
17:27 pyros [2014-04-08 17:24:38.658168] W [rpcsvc.c:1389:rpcsvc_transport_create] 0-rpc-service: cannot create listener, initing the transport failed
17:27 pyros [2014-04-08 17:24:39.601517] I [glusterd-store.c:1339:glusterd_restore_op_version] 0-glusterd: retrieved op-version: 2
17:27 pyros [2014-04-08 17:24:39.605908] E [glusterd-store.c:1858:glusterd_store_retrieve_volume] 0-: Unknown key: brick-0
17:27 pyros [2014-04-08 17:24:39.605935] E [glusterd-store.c:1858:glusterd_store_retrieve_volume] 0-: Unknown key: brick-1
17:27 pyros [2014-04-08 17:24:39.966177] I [glusterd-handler.c:2818:glusterd_friend_add] 0-management: connect returned 0
17:27 pyros [2014-04-08 17:24:39.969108] I [glusterd-handler.c:2818:glusterd_friend_add] 0-management: connect returned 0
17:27 pyros [2014-04-08 17:24:39.971896] I [glusterd-handler.c:2818:glusterd_friend_add] 0-management: connect returned 0
17:27 pyros [2014-04-08 17:24:39.971981] I [rpc-clnt.c:962:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
17:27 pyros [2014-04-08 17:24:39.972056] I [socket.c:3480:socket_init] 0-management: SSL support is NOT enabled
17:27 pyros [2014-04-08 17:24:39.972067] I [socket.c:3495:socket_init] 0-management: using system polling thread
17:27 pyros [2014-04-08 17:24:39.977052] I [rpc-clnt.c:962:rpc_clnt_connection_init] 0-management: se
17:27 pyros 0-: received signum (0), shutting down
17:27 pyros help me please
17:27 pyros the daemon not start
17:30 John___ almost seems more like a network device issue
17:30 John___ 0-rdma.management: Failed to initialize IB Device
17:30 pyros maybe ports?
17:30 John___ possibly. Are you using a Mellanox card by any chance? default drivers vs MLNX OFED?
17:31 pyros whats IB Device?
17:31 John___ IB usually refers to infiniband. a network card most likely
17:31 pyros i am using amazon EC2
17:32 John___ never touched the EC2 =/
17:33 pyros they are virtual machines on demand
17:33 John___ ya. Not sure how those are setup/interact. never had a chance to play with it.
17:34 pyros anyway i have had gluster fs working properly in both servers
17:34 pyros (i have two nodes)
17:34 John___ I know I saw that same issue; had the wrong drivers installed for my Mellanox card.
17:35 John___ but I'm working with physical hardware, not cloud based.
17:35 pyros i have to reinstall ??
17:35 badone joined #gluster
17:36 John___ I don't think so, but may take some tweaking of the network interfaces
17:37 swat30 joined #gluster
17:42 Pavid7 joined #gluster
17:43 jiffe98 joined #gluster
17:46 semiosis pyros didnt stay around long enough
17:55 vpshastry joined #gluster
17:55 kmai007 has anybody ever used this feature?
17:55 kmai007 Option: nfs.dynamic-volumes
17:55 kmai007 Default Value: (null)
17:55 kmai007 Description: Internal option set to tell gnfs to use a different scheme for encoding file handles when DVM is being used.
17:56 kmai007 i have a requirement to use glusterNFS
18:00 JoeJulian If you're securing your bricks with openssl, be aware of https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2014-0160
18:00 glusterbot Title: National Vulnerability Database (NVD) National Vulnerability Database (CVE-2014-0160) (at web.nvd.nist.gov)
18:01 lalatenduM joined #gluster
18:02 johnbot11 joined #gluster
18:04 johnbot11 Hi there. I see that Gluster3.5 will include improvements to quotas including moving over to server side enforcement. Will per-user quotas be possible?
18:06 kmai007 JoeJulian: how have you beeN?  going to summit?  I was wondering if you had any experience with the glusterNFS set features ?
18:06 JoeJulian I'm going to summit but I don't know how I'm getting in. I still don't have the registration code.
18:07 kmai007 me either...!
18:07 NuxRo JoeJulian: do you know who maintains gluster.org? it's vulnerable to the SSL bleed thingy
18:07 JoeJulian swell
18:07 JoeJulian I'm still looking for the package build for Centos6.5.
18:08 NuxRo JoeJulian: the openssl update is already in, landed last night or so
18:08 qdk joined #gluster
18:09 JoeJulian Must have been after my mirror sync. :(
18:10 NuxRo maybe
18:10 NuxRo you should upgrade asap
18:10 JoeJulian I'm doing my $dayjob first.
18:10 NuxRo people can see stuff
18:10 zerick joined #gluster
18:10 JoeJulian ... and revoking keys, etc...
18:11 NuxRo right
18:11 NuxRo are you the only one maintaining gluster.org?
18:11 JoeJulian 2 years... <sigh>
18:11 NuxRo i dont mind helping with a yum update every now and then
18:11 JoeJulian No, I'll post an email.
18:11 NuxRo right now the site is leaking private info
18:12 NuxRo i just "sniffed" a session, might have been kmai007's
18:12 Joe630 joined #gluster
18:13 NuxRo was crazy day, updating the damn thing and restarting everything, even VPNs had to go down momentarily
18:15 John___ that it has been
18:19 an joined #gluster
18:27 ctria joined #gluster
18:31 stupidnic left #gluster
18:32 johnbot1_ joined #gluster
18:34 glusterbot joined #gluster
18:43 uebera|| joined #gluster
18:43 uebera|| joined #gluster
18:55 rakkaus_ joined #gluster
18:56 rakkaus_ hi guys! I need to install glusterfs 3.3.1-15.el6 on my CentOS 6.5
18:56 rakkaus_ how can I do that with yum ?
18:56 rakkaus_ I have added glusterfs-epel
18:57 John___ You would have to hardcode the base url path to that direcotry
18:57 John___ baseurl=http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-$releasever/SRPMS
18:57 rakkaus_ ah there is only latest
18:57 dbruhn yum install glusterfs glusterfs-fuse glusterfs-server
18:58 John___ that would have to become baseurl=http://download.gluster.org/pub/gluster/glusterfs/3.3/3.3.1/EPEL.repo/epel-$releasever/SRPMS
18:58 dbruhn agh yeah, you need to add the epel for the correct version
18:58 rakkaus_ okay! thanks!
18:59 John___ np
18:59 chirino joined #gluster
19:01 John___ Actually, rakkaus, just download the http://download.gluster.org/pub/gluster/glusterfs/3.3/3.3.1/EPEL.repo/glusterfs-epel.repo
19:01 John___ that should give you what you want
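Put differently, a repo file pinned to 3.3.1 would look something like the sketch below; note the binary packages live under epel-$releasever/$basearch rather than SRPMS, and gpgcheck handling may vary:
    # /etc/yum.repos.d/glusterfs-epel.repo
    [glusterfs-epel]
    name=GlusterFS 3.3.1 (EPEL)
    baseurl=http://download.gluster.org/pub/gluster/glusterfs/3.3/3.3.1/EPEL.repo/epel-$releasever/$basearch/
    enabled=1
    gpgcheck=0

    yum install glusterfs-3.3.1 glusterfs-fuse-3.3.1 glusterfs-server-3.3.1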
19:03 jag3773 joined #gluster
19:04 glusterbot New news from resolvedglusterbugs: [Bug 761770] ls: Input/Output Error on client in mounted volume 64bit kernel/ 32bit userland <https://bugzilla.redhat.com/show_bug.cgi?id=761770> || [Bug 1081605] glusterd should not create zombies if xfsprogs or e2fsprogs are missing <https://bugzilla.redhat.com/show_bug.cgi?id=1081605>
19:04 glusterbot New news from newglusterbugs: [Bug 1085425] Input/Output Errors with 64bit Server and 32bit client <https://bugzilla.redhat.com/show_bug.cgi?id=1085425> || [Bug 1084175] tests/bugs/bug-861542.t needs to be more robust. It's failing on long hostnames. <https://bugzilla.redhat.com/show_bug.cgi?id=1084175> || [Bug 1084964] SMB: CIFS mount fails with the latest glusterfs rpm's <https://bugzilla.redhat.com/show_bug.cgi?id=10849
19:06 nightwalk joined #gluster
19:08 pk joined #gluster
19:09 an joined #gluster
19:10 JonathanD joined #gluster
19:11 P0w3r3d joined #gluster
19:12 kmai007 NuxRo: what session? i was away
19:14 semiosis kmai007: exploiting the heartbleed vuln on gluster.org
19:14 semiosis the server gave up a user's session id
19:14 kmai007 yikes, what should i do?
19:15 semiosis meh
19:15 semiosis it's a public wiki, why would anyone bother hijacking a session?
19:15 * semiosis not too worried about it
19:16 kmai007 oh so its the webservers, not the gluster storage nodes itself?
19:16 John___ correct
19:16 kmai007 ty
19:18 JoeJulian But you could get storage nodes to give up their ssl keys if you're using that feature.
19:18 John___ btw. I hate to be "that guy" but it appears the updated respond.xml for el5 is acting up. "  glusterfs-geo-replication-3.4.3-2.el5.x86_64: failed to retrieve glusterfs-geo-replication-3.4.3-2.el5.x86_64.rpm from glusterfs-epel error was [Errno 2] Local file does not exist: /root/pdate/glusterfs-geo-replication-3.4.3-2.el5.x86_64.rpm"
19:20 John___ err.. repmod.xml
19:21 kkeithley_ ndevos, John___: grrrr.  I've signed the el5 rpms on a CentOS 5.10 box, now with V3 sig but still getting signature: BAD, key ID 4ab22bb3.
19:21 John___ wierd
19:21 kkeithley_ wtf
19:22 shubhendu joined #gluster
19:22 kkeithley_ and I've explicitly imported the pubkey as well as using a repo file with gpgkey=...
19:26 kkeithley_ at this point I either put unsigned el5 rpms up, which is what I used to do until a couple weeks ago when someone complained. Or put signed rpms up and tell people to use --nogpgcheck to install
19:27 John___ probably do the --nogpgcheck
19:27 John___ it seems like some people can update fine?
19:28 nage joined #gluster
19:29 kkeithley_ original signed packages are back. On my $%^&* centos box yum is now borked and won't download.
19:31 John___ Same here
19:31 John___ Local file does not exist: /root/test/pdate/glusterfs-geo-replication-3.4.3-2.el5.x86_64.rpm
19:31 Pavid7 joined #gluster
19:31 John___ might be the line in repmod.xml
19:31 kkeithley_ yeah, I get the same thing
19:31 John___ <location xml:base="pdate" href="repodata/3cfd7223ba2a6d98854dd8100a730880-filelists.sqlite.bz2"/>
19:32 kkeithley_ I can try creating new repo files instead of -update
19:32 kkeithley_ let's see
19:32 John___ lol
19:33 John___ probably the -update took "pdate" as a definition to the -u option?
19:34 kkeithley_ ah, that's better
19:34 kkeithley_ so much for getting my real work done today
19:35 John___ error: glusterfs-libs-3.4.3-2.el5: Header V4 RSA/SHA1 signature: BAD, key ID 4ab22bb3 SOB, even with --nogpgcheck
19:36 kkeithley_ but they install anyway
19:36 kkeithley_ or not
19:36 John___ glusterfs-3.4.2-1.el5
19:36 John___ nope
19:36 John___ rpm -qa still shows -1
19:37 kkeithley_ says it installed, but didn't
19:37 John___ correct
19:37 B21956 joined #gluster
19:39 kkeithley_ I'm this >< close to removing the sigs
19:39 John___ But you're able to replicate it with your CentOS 5.10 build?
19:41 kkeithley_ replicate what?
19:41 John___ Same key errors I'm getting
19:41 kkeithley_ yup
19:42 kkeithley_ I signed on the centos5.10 box, got a V3 sig, but same bad key ID error.
19:43 kkeithley_ I wonder if it's the key size.
19:43 John___ That could be.
19:43 ultrabizweb joined #gluster
19:44 John___ The CentOS 5 is Algorithm  : DSA (Digital Signature Standard)
19:44 John___ Gluser is: Algorithm  : RSA (Encrypt or Sign)
19:44 kkeithley_ wait, ISTR making the glusterpackager's key smaller because some things were balking on my personal key, which is big
19:45 kkeithley_ oh
19:45 lpabon joined #gluster
19:45 Matthaeus joined #gluster
19:46 * kkeithley_ needs to learn more about pub keys and signing
19:47 pk joined #gluster
19:47 pk left #gluster
19:48 lmickh joined #gluster
19:55 rwheeler joined #gluster
19:56 Philambdo joined #gluster
20:10 andreask joined #gluster
20:16 purpleidea kkeithley_: the NSA has influenced the process enough to make it complicated on purpose :P
20:17 purpleidea the proof is tha
20:17 purpleidea [transmission cut off]
20:20 kkeithley_ how droll. No beer for you next week
20:20 kkeithley_ ;-)
20:21 lalatenduM purpleidea, lol :)
20:21 purpleidea :(
20:23 jayunit100 joined #gluster
20:29 kkeithley_ ah, you're back, despite the NSA's best efforts
20:32 elico joined #gluster
20:39 andrewklau joined #gluster
20:50 jag3773 joined #gluster
20:55 ccha joined #gluster
21:05 an joined #gluster
21:09 calum_ joined #gluster
21:24 tdasilva left #gluster
21:24 social kkeithley_: x509 & company?
21:25 social kkeithley_: it's complicated, bloated and completly bugous and if there is something sane in protocols than the implementation sucks =[
21:28 primechuck joined #gluster
21:30 Ark joined #gluster
21:49 nightwalk joined #gluster
21:55 pjschmitt joined #gluster
22:00 fidevo joined #gluster
22:05 nightwalk joined #gluster
22:09 asku joined #gluster
22:12 theron joined #gluster
22:13 theron joined #gluster
22:18 kkeithley_ social: ??
22:32 asku Noticed that http://download.gluster.org/pub/gluster/glusterfs/3.4/LATEST/Debian/ is empty.  Is this expected?
22:32 glusterbot Title: Index of /pub/gluster/glusterfs/3.4/LATEST/Debian (at download.gluster.org)
22:48 semiosis asku: sorry i have been super busy lately & didnt get around to building the wheezy debs.  i will do it tonight.  thanks for being patient
22:50 asku np, was just curious.  doing lots of upgrades thanks to heartbleed and 404's kept popping up.
22:50 asku thanks
23:01 shyam joined #gluster
23:06 semiosis oops
23:06 semiosis yw
23:21 dtrainor joined #gluster
23:40 plarsen joined #gluster
23:45 doekia joined #gluster