IRC log for #gluster, 2013-11-15


All times shown according to UTC.

Time Nick Message
00:20 cyberbootje joined #gluster
00:28 cyberbootje joined #gluster
00:37 cyberbootje joined #gluster
00:42 cyberbootje joined #gluster
00:51 cyberbootje joined #gluster
01:26 davidbierce joined #gluster
01:34 harish_ joined #gluster
02:07 zwu joined #gluster
02:18 recidive joined #gluster
02:19 cyberbootje joined #gluster
02:30 harish_ joined #gluster
02:47 nueces joined #gluster
02:53 hagarth joined #gluster
03:02 bharata-rao joined #gluster
03:08 sgowda joined #gluster
03:14 kPb_in joined #gluster
03:19 glusterbot New news from newglusterbugs: [Bug 1029482] AFR: cannot get volume status when one node down <http://goo.gl/VFYYJ4>
03:36 Shdwdrgn joined #gluster
03:39 shubhendu joined #gluster
03:41 itisravi joined #gluster
03:45 tyl0r joined #gluster
03:47 raar joined #gluster
03:51 ingard__ joined #gluster
03:52 fagiani joined #gluster
03:52 fagiani joined #gluster
03:52 fagiani hi, anybody that can give me a hint with how to move files from one volume to another so I can free space?
03:55 ndarshan joined #gluster
03:56 spandit joined #gluster
03:58 RameshN joined #gluster
04:00 fagiani hi, anybody that can give me a hint with how to move files from one volume to another so I can free space?
04:03 mohankumar__ joined #gluster
04:04 hagarth joined #gluster
04:06 Debolaz joined #gluster
04:10 RameshN joined #gluster
04:10 kPb_in joined #gluster
04:22 vpshastry joined #gluster
04:33 fagiani hi, anybody that can give me a hint with how to move files from one volume to another so I can free space?
04:35 sghosh joined #gluster
04:37 sghosh joined #gluster
04:45 vpshastry left #gluster
04:45 dusmant joined #gluster
04:49 kanagaraj joined #gluster
04:53 jag3773 joined #gluster
04:54 bala joined #gluster
05:02 ricky-ticky joined #gluster
05:02 kshlm joined #gluster
05:05 nasso joined #gluster
05:07 fagiani hi, anybody that can give me a hint with how to move files from one volume to another so I can free space?
05:09 samppah fagiani: can you please tell more about what you want to do?
05:18 saurabh joined #gluster
05:19 shruti joined #gluster
05:20 ppai joined #gluster
05:24 psharma joined #gluster
05:26 chirino joined #gluster
05:27 meghanam joined #gluster
05:29 rastar joined #gluster
05:32 ababu joined #gluster
05:33 shylesh joined #gluster
05:35 CheRi joined #gluster
05:51 mohankumar__ joined #gluster
05:56 jfield joined #gluster
05:58 vpshastry joined #gluster
05:59 vpshastry left #gluster
06:02 lalatenduM joined #gluster
06:09 samppah is there any documentation about rhs client and glusterfs server compatibility?
06:10 samppah i understand that it might not be a red hat supported setup but is it a completely "don't do that at all" setup
06:10 samppah afaik, rhs has some patches that might not be released in upstream yet
06:11 ricky-ticky joined #gluster
06:18 nueces joined #gluster
06:40 dasfda joined #gluster
06:44 vimal joined #gluster
06:50 glusterbot New news from newglusterbugs: [Bug 1023667] The Python libgfapi API needs more fops <http://goo.gl/vxi0Zq>
06:54 hagarth samppah: more appropriate question for the other rhs (red hat support) ;)
06:54 hagarth samppah: but the overall intention is to provide N+1 compatibility afaik
06:55 samppah hagarth: N+1 compatibility? :)
06:56 hagarth as in 2.0 to be compatible with 2.1 and so on
06:57 samppah ah, okay
06:57 nshaikh joined #gluster
07:00 samppah hagarth: thanks, i'll contact red hat representative to see what they say :)
07:02 hagarth samppah: np
07:10 jtux joined #gluster
07:11 davidbierce joined #gluster
07:12 ricky-ticky joined #gluster
07:17 ngoswami joined #gluster
07:19 hagarth @channelstats
07:19 glusterbot hagarth: On #gluster there have been 206425 messages, containing 8529604 characters, 1421015 words, 5542 smileys, and 754 frowns; 1232 of those messages were ACTIONs. There have been 82901 joins, 2565 parts, 80351 quits, 23 kicks, 173 mode changes, and 7 topic changes. There are currently 202 users and the channel has peaked at 239 users.
07:20 DV joined #gluster
07:23 ricky-ticky joined #gluster
07:24 anonymus joined #gluster
07:24 anonymus hi guys
07:24 anonymus help me please
07:25 anonymus gluster volume geo-replication myvol server2:/data/myvol_slave start
07:25 anonymus makes 'unrecognized word: geo-replication (position 1)'
07:25 anonymus error
07:25 anonymus when I'm doing this http://gluster.org/community/documentation/index.php/HowTo:geo-replication
07:25 glusterbot <http://goo.gl/tS2cXH> (at gluster.org)
07:26 anonymus I am using 3.4 glusterfs
07:27 anonymus on fedora 19
07:28 samppah anonymus: did you install glusterfs-geo-replication package?
07:29 anonymus samppah:  nope :(
07:29 anonymus already looking for it
07:33 anonymus samppah: !!! Starting geo-replication session between myvol & server2:/data/myvol_slave has been successful
07:33 anonymus samppah:  thank you very much!!
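For anyone following along, a minimal sketch of the fix discussed above, assuming GlusterFS 3.4 on Fedora 19 with the glusterfs-geo-replication package from the distribution/gluster.org repositories and the volume and slave names used in the conversation:

    yum install glusterfs-geo-replication
    gluster volume geo-replication myvol server2:/data/myvol_slave start
    gluster volume geo-replication myvol server2:/data/myvol_slave status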
07:52 raar joined #gluster
07:57 ctria joined #gluster
08:05 eseyman joined #gluster
08:09 anonymus samppah:
08:10 getup- joined #gluster
08:11 anonymus why does the du -hs utility show 0 size for the replicated file on the slave node?
08:11 anonymus root@fedora2# du -hs /data/myvol_slave/temp.iso                                                                           ~
08:11 anonymus 0    /data/myvol_slave/temp.iso
08:11 anonymus root@fedora1# du -hs /data/gv0/brick2/temp.iso                                                                  ~
08:11 anonymus 25M    /data/gv0/brick2/temp.iso
08:12 samppah anonymus: what does ls -l say?
08:13 anonymus -rw-r--r-- 1 root root 25165824 Nov 15 03:03 /data/myvol_slave/temp.iso
08:13 anonymus stat shows that size is 25 mb
08:14 anonymus and blocks = 0
08:14 samppah can you do md5sum?
08:14 anonymus moment
08:14 anonymus they are identical
08:15 anonymus on both nodes
08:15 samppah ok, good.. iirc i have been wondering about the same thing with du.. i think it must be because of hardlinking with the file in the .glusterfs directory
08:16 anonymus is it ok?
08:17 anonymus I mean if the connection between master and slave would break
08:17 samppah anonymus: is this over georeplication or normal?
08:17 anonymus will it be possible to read slave's file
08:18 anonymus over geo-
08:18 anonymus over normal everything seems to be ok
08:18 samppah ah, i don't have geo-replication setup available right now to do testing
08:18 samppah but it definitely should be able to read the slave
08:18 samppah and if md5sum matches i'd say everything is fine
08:19 anonymus hmm. i will try to shut down master and make md5sum again
08:19 anonymus yes, it is ok
08:19 anonymus thank you again, samppah
08:20 anonymus as usual you are very helpful
08:20 samppah anonymus: no problem :)
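For reference, the checks used above spelled out, assuming the file paths from the conversation; stat reports the apparent size, allocated blocks and hardlink count, and md5sum compares the content end to end:

    stat -c '%n size=%s blocks=%b links=%h' /data/myvol_slave/temp.iso   # on the slave
    md5sum /data/myvol_slave/temp.iso                                    # on the slave
    md5sum /data/gv0/brick2/temp.iso                                     # on the master brick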
08:29 hybrid5121 joined #gluster
08:40 geewiz joined #gluster
08:55 rastar joined #gluster
09:11 Guest18507 joined #gluster
09:11 davidbierce joined #gluster
09:15 ndarshan joined #gluster
09:22 bgpepi joined #gluster
09:23 eseyman joined #gluster
09:24 andreask joined #gluster
09:34 marbu joined #gluster
09:40 ngoswami joined #gluster
09:52 getup- joined #gluster
09:52 anonymus left #gluster
09:57 vshankar joined #gluster
10:21 cyberbootje joined #gluster
10:44 andreask joined #gluster
10:48 rastar joined #gluster
10:53 getup- joined #gluster
10:56 calum_ joined #gluster
10:59 harish_ joined #gluster
11:00 hybrid5121 joined #gluster
11:01 RedShift joined #gluster
11:10 getup- whats generally the best way to verify if what gluster does is in sync apart from an md5sum on a single file? is there some other monitoring that can be done?
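One commonly used check for a replicated volume on 3.3/3.4, not mentioned in the channel here, is the self-heal info output; VOLNAME is a placeholder:

    gluster volume heal VOLNAME info
    gluster volume heal VOLNAME info split-brain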
11:12 davidbierce joined #gluster
11:18 ababu joined #gluster
11:33 itisravi_ joined #gluster
11:40 rastar joined #gluster
11:46 diegows joined #gluster
11:59 diego joined #gluster
12:00 getup- joined #gluster
12:01 Guest9312 Guys, I have lost my gluster config files, so I tried to recreate my volumes without losing data, but when I try to start the volume I get an error regarding a volume id mismatch. Any ideas?
12:14 diegol__ Removing extended attributes from brick directory should be the way to go?
12:24 vpshastry joined #gluster
12:24 vpshastry left #gluster
12:29 getup- joined #gluster
12:44 ctria joined #gluster
12:55 DV joined #gluster
12:55 andreask joined #gluster
13:05 B21956 joined #gluster
13:10 ndarshan joined #gluster
13:14 ngoswami joined #gluster
13:17 ipvelez joined #gluster
13:17 ipvelez hello good morning
13:17 ipvelez I'm having some issues with my first Gluster installation, and I don't know what else to do
13:18 samppah good afternoon ipvelez
13:18 ipvelez I set up 2 servers with a replicated volume
13:19 ipvelez and was able to mount the volume and write to it, however it only wrote files to the first server and did not replicate to the second
13:20 bala joined #gluster
13:20 samppah ipvelez: so you did mount -t glusterfs server:/volname /mnt/volname and wrote files to /mnt/volname, right?
13:21 ipvelez yes
13:21 samppah what glusterfs version are you using and what distribution?
13:22 ipvelez I am using version 3.2.7 on two Amazon servers with the Amazon Linux AMI
13:24 getup- joined #gluster
13:24 samppah ipvelez: anything in log files?
13:24 ipvelez and once I wrote to /mnt/volname, I could see the file on the brick of one of the servers but not on the brick of the other server
13:24 ira joined #gluster
13:25 ipvelez not much, I see some lines that say FAILES but they are not helping
13:25 ipvelez FAILED* i mean
13:25 samppah can you send it to pastie.org?
13:28 ipvelez ok, let me get it
13:32 ipvelez which of the log files contents do you want to see?
13:33 samppah client log, it's usually /var/log/glusterfs/mntpnt.log
13:34 rastar joined #gluster
13:39 ctria joined #gluster
13:41 getup- joined #gluster
13:42 ipvelez http://pastie.org/private/btvta7lsgwyojtl5dwa
13:42 glusterbot Title: Private Paste - Pastie (at pastie.org)
13:46 nasso joined #gluster
13:50 samppah ipvelez: it's not able to connect to another host
13:50 samppah [2013-11-11 11:06:27.325482] E [name.c:253:af_inet_client_get_remote_sockaddr] 0-axio-volume-client-1: DNS resolution failed on host ec2-54-204-186-132.compute-1.amazonaws.com
13:50 samppah [2013-11-11 11:06:41.696271] E [common-utils.c:125:gf_resolve_ip6] 0-resolver: getaddrinfo failed (Name or service not known)
13:50 ipvelez oh
13:51 ipvelez and how is it that it was able to add it to the volume, then?
13:52 samppah are you using the client and server on the same node?
13:53 ipvelez on one of the servers, yes
13:54 samppah can you check if dns resolution succeeds with host or dig? and are you able to ping it also?
13:54 vpshastry joined #gluster
13:56 hchiramm_ joined #gluster
13:57 giannello joined #gluster
13:57 ipvelez you are right, it does not find it
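The checks samppah suggests above, spelled out against the hostname from the log:

    host ec2-54-204-186-132.compute-1.amazonaws.com
    dig +short ec2-54-204-186-132.compute-1.amazonaws.com
    ping -c 3 ec2-54-204-186-132.compute-1.amazonaws.com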
14:03 jskinner_ joined #gluster
14:03 diegol__ Guys, I have lost my gluster config files, so I tried to recreate my volumes without losing data, but when I try to start the volume I get an error regarding a volume id mismatch. Any ideas? I have read that I can remove the extended attributes but I'm not sure :-S
14:06 skered- The issue I was having yesterday with a Tru64 DEC Alpha machine connecting to a GlusterFS NFS export: it appears GlusterFS only runs mountd on tcp
14:08 hybrid5121 joined #gluster
14:08 mbukatov joined #gluster
14:12 plarsen joined #gluster
14:14 tqrst does fix-layout happen independently on every brick, or is there any communication involved?
14:15 tqrst 'rebalance status' is showing that there is a rebalance in progress on all servers except on one which says "completed" as of ~4-5 hours ago, so I'm guessing it's independent at the server level?
14:22 hchiramm_ joined #gluster
14:27 tjikkun_work joined #gluster
14:28 davidbierce joined #gluster
14:29 vpshastry left #gluster
14:42 andreask joined #gluster
14:44 samsamm joined #gluster
14:53 semiosis samppah: what distro?
14:54 semiosis oops, never mind
14:54 semiosis not your system
14:54 * semiosis reads further back
14:54 failshell joined #gluster
14:56 semiosis ipvelez: check that your ec2 security group allows the glusterfs ,,(ports)
14:56 glusterbot ipvelez: glusterd's management port is 24007/tcp and 24008/tcp if you use rdma. Bricks (glusterfsd) use 24009 & up for <3.4 and 49152 & up for 3.4. (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111 and 2049 since 3.4.
14:56 semiosis ipvelez: also you should upgrade to the ,,(latest) version
14:56 glusterbot ipvelez: The latest version is available at http://goo.gl/zO0Fa . There is a .repo file for yum or see @ppa for ubuntu.
14:56 semiosis 3.2 is ooold
14:56 * tqrst shudders at the memory
14:56 semiosis ipvelez: finally, i see you used local-ipv4 in a brick address -- that is likely to change so it's advisable to use hostnames
14:57 bennyturns joined #gluster
14:57 ipvelez ok, thanks for the input, I will check the security group!
14:58 ipvelez and will try to install latest version
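A rough sketch of the firewall side for a 3.4 install, with port numbers taken from the glusterbot note above; the brick range here is an assumption (widen it to cover the number of bricks per server), and on EC2 the equivalent rules can go in the security group instead of iptables:

    iptables -A INPUT -p tcp --dport 24007:24008 -j ACCEPT            # glusterd management
    iptables -A INPUT -p tcp --dport 49152:49251 -j ACCEPT            # bricks on 3.4 (24009 and up on older releases)
    iptables -A INPUT -p tcp --dport 38465:38468 -j ACCEPT            # gluster NFS and NLM, if NFS mounts are used
    iptables -A INPUT -p tcp -m multiport --dports 111,2049 -j ACCEPT # portmapper and NFS
    iptables -A INPUT -p udp --dport 111 -j ACCEPT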
14:58 tqrst is there any speed difference between doing a fix-layout before rebalancing as opposed to only rebalancing (which involves an implicit fix-layout)?
14:59 tqrst the answer seems to be yes (http://www.mail-archive.com/gluster-users@gluster.org/msg11478.html)
14:59 glusterbot <http://goo.gl/KcFOlJ> (at www.mail-archive.com)
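For reference, the two variants being compared, with VOLNAME as a placeholder; fix-layout only rewrites the directory layouts so new files can land on new bricks, while a plain rebalance also migrates existing data (and does an implicit fix-layout first):

    gluster volume rebalance VOLNAME fix-layout start
    gluster volume rebalance VOLNAME start
    gluster volume rebalance VOLNAME status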
15:01 bugs_ joined #gluster
15:05 zerick joined #gluster
15:11 jmeeuwen_ joined #gluster
15:12 zwu joined #gluster
15:12 jbautista|brb joined #gluster
15:12 sghosh joined #gluster
15:18 brimstone i found the rot-13 (heh) translator, but are there any other encryption based translators?
15:23 kkeithley the rpc socket xlator will use ssl encryption if you enable it.
15:23 glusterbot New news from newglusterbugs: [Bug 1016482] Owner of some directories become root <http://goo.gl/fJd8d3>
15:26 shubhendu joined #gluster
15:30 getup- hi, although the time on the server is right, it seems that gluster's logging is still an hour behind. is there something i missed?
15:31 hateya joined #gluster
15:32 tqrst getup-: it seems like logs are in GMT0
15:33 tqrst it's a feature, apparently
15:33 tqrst https://bugzilla.redhat.com/show_bug.cgi?id=958062
15:33 glusterbot <http://goo.gl/kcQ8hp> (at bugzilla.redhat.com)
15:33 glusterbot Bug 958062: high, medium, ---, rhs-bugs, ASSIGNED , logging: date mismatch in glusterfs logging and system "date"
15:33 tqrst s/GMT0/UTC
15:34 getup- in a way that makes sense
15:34 tqrst I'm guessing it's because there are no weird daylight saving adjustments
15:34 tqrst kkeithley: any other reason?
15:35 tqrst ("as any good appliance the time should be set to UTC, and the logs"
15:35 tqrst 0
15:35 tqrst )
15:36 getup- probably
15:39 semiosis "server time"
15:39 social joined #gluster
15:41 brimstone kkeithley: i'm looking for one that encrypts the files
15:41 getup- tqrst: thanks
15:43 vpshastry joined #gluster
15:47 social hmm, geo-replication throws a malformed url error on hostnames containing a mix of capital and non-capital letters, like EXAMPLE-1.example.com
15:47 spechal_ joined #gluster
15:47 vpshastry left #gluster
15:49 semiosis pretty sure hostnames are supposed to be lower case
15:49 social semiosis: it should be case insensitive imho, dunno which rfc says so
15:50 semiosis i see
15:50 social Domain names may be formed from the set of alphanumeric ASCII characters (a-z, A-Z, 0-9), but characters are case-insensitive. In addition the hyphen is permitted if it is surrounded by a characters or digits, i.e., it is not the start or end of a label. Labels are always separated by the full stop (period) character in the textual name representation.
15:50 social fom wiki :/
15:51 semiosis well file a bug i guess
15:51 glusterbot http://goo.gl/UUuCq
15:52 semiosis if you're certain that's the issue & it's clearly reproducible
15:52 social I'm already on it :)
15:52 social but wasn't sure for while if it's case insensitive
15:54 semiosis http://en.wikipedia.org/wiki/Hostname#Restrictions_on_valid_host_names
15:54 glusterbot <http://goo.gl/V5Ye9> (at en.wikipedia.org)
15:54 semiosis agrees with you
15:58 esalexa|wfh doing a POC with RHS2.1 (gluster 3.4).  accessing a distributed volume via NFS (from multiple mac/osx).  users claiming a limit of 4 simultaneous data writes.  any ideas why this would be the case?
16:02 lbalbalba joined #gluster
16:02 lpabon joined #gluster
16:03 semiosis esalexa|wfh: have you tried using another type of nfs server?  a mac or linux kernel nfs server?
16:04 lbalbalba dns names are case insensitive. try 'ping WWW.GOOGLE.COM', or even 'ping WwW.GoOgLe.CoM'
16:05 esalexa|wfh semiosis: they write via nfsv3 to a netapp filer without any such limits
16:06 semiosis hmm
16:06 esalexa|wfh i'm just starting to look into this, but didn't find anything relevant in gluster/rhs docs.
16:07 lbalbalba just curious: how do the users determine this limit of 4 simultaneous writes ?
16:09 esalexa|wfh i haven't sat with them yet, but it seems that with 4 writes going on, any further writes wait for the others to complete.
16:09 esalexa|wfh i need to get more info from them.  right now, it's a fishing expedition
16:11 lbalbalba perhaps you should ask how they produce the situation, so that you can reproduce it and test / troubleshoot on your own.
16:12 daMaestro joined #gluster
16:20 sprachgenerator joined #gluster
16:23 zaitcev joined #gluster
16:24 glusterbot New news from newglusterbugs: [Bug 1031107] georeplication fails with uppercase letters in hostname <http://goo.gl/Ofkumd>
16:26 lbalbalba yeah thats a bug allright. dns names are case insensitive. see rfc4343: http://tools.ietf.org/html/rfc4343#page-2
16:26 glusterbot Title: RFC 4343 - Domain Name System (DNS) Case Insensitivity Clarification (at tools.ietf.org)
16:34 esalexa|wfh lbalbalba: thanks.  i'll be doing some testing myself and be back here if confirmed
16:35 lbalbalba esalexa|wfh: no problem. good luck! im really really curious about this one. :)
16:39 Liquid-- joined #gluster
16:42 Liquid-- Hello, I'm trying to setup a gluster volume on top of ZFS. I have a mount called /zfs/nova-instances that I'm trying to put Gluster on top of. However I'm getting this error: 'volume create: glance-images: failed: Host 192.168.11.61 is not in 'Peer in Cluster' state'
16:43 lbalbalba just curious: did you remember to 'gluster peer probe myserver' from another node thats in the cluster ?
16:43 Liquid-- I have two vm's, of which 192.168.11.61 is one. When i run 'gluster peer probe 192.168.11.61' it runs successfully... So I'm not sure why it wouldn't be in 'Peer in Cluster' state. Any ideas? :)
16:44 Liquid-- yes that works. However, I find this very weird:
16:44 Liquid-- gluster> peer status
16:44 Liquid-- Number of Peers: 1
16:44 Liquid-- Hostname: 192.168.11.60
16:44 Liquid-- Uuid: 06ac6d4c-51ed-4b60-be57-c25668552143
16:44 Liquid-- State: Peer in Cluster (Disconnected)
16:44 Liquid-- I'm not sure why my peer probe would succeed but my peer status shows some other node in a disconnected state.
16:45 JoeJulian what does peer status show on .60?
16:46 Liquid-- well it's weird because I don't even have a node on .60 My node is on .61 :?
16:46 lbalbalba hrm. did you run 'gluster peer probe 192.168.11.61' when logged onto 192.168.11.60 ? is the output you show of 'peer status' from the cmdline of 192.168.11.61 or 192.168.11.60 ?
16:46 lbalbalba oh. wait nvrmind
16:47 Liquid-- So I have 2 vm's one is 192.168.11.103 and the other is 192.168.11.61
16:47 Liquid-- I'm not even sure where it's getting .60 :/
16:48 JoeJulian The only way would have been for you to have (successfully) probed it.
16:48 lbalbalba kay. so when you log onto 192.168.11.103 you run 'gluster peer probe 192.168.11.61' , and then you get the output you posted from peer info
16:49 Liquid-- on 192.168.11.103:
16:49 Liquid-- gluster peer probe 192.168.11.61
16:49 Liquid-- peer probe: success: host 192.168.11.61 port 24007 already in peer list
16:49 JoeJulian I'm guessing one of your VMs was previously .60 and after you stopped and started it it was assigned a different ip address for whatever reason.
16:50 JoeJulian One of the reasons we recommend ,,(hostnames)
16:50 glusterbot Hostnames can be used instead of IPs for server (peer) addresses. To update an existing peer's address from IP to hostname, just probe it by name from any other peer. When creating a new pool, probe all other servers by name from the first, then probe the first by name from just one of the others.
16:50 Liquid-- hmm That's possible, but If i'm probing .61 then why is it not in my list when I run 'peer status' from command line?
16:51 JoeJulian peer status shows the peers of the server you're querying, not itself.
16:51 lbalbalba yeah, what was that cmd that shows both itself and the peers again ?
16:52 lbalbalba 'peer info' ?
16:53 JoeJulian I think that's coming in 3.5
16:53 Liquid-- on 192.168.11.103:
16:53 Liquid-- gluster> peer status
16:53 Liquid-- Number of Peers: 1
16:53 Liquid-- Hostname: 192.168.11.60
16:53 Liquid-- Uuid: 06ac6d4c-51ed-4b60-be57-c25668552143
16:53 Liquid-- State: Peer in Cluster (Disconnected)
16:54 JoeJulian Makes sense.
16:54 Liquid-- both vm's /etc/hosts has openstack-controller 192.168.11.103 & openstack-computenode 192.168.11.61
16:55 JoeJulian hostnames won't matter since you don't use them.
16:55 lbalbalba JoeJulian: i used that in (if i remember correctly) 3.3 or 3.4 -ish git
16:56 Liquid-- so do I need to remove 192.168.11.60 from the cluster list? That's not even a node on my network :/ I'm wondering how I get 192.168.11.61 on that list and in Peer in Cluster (connected) status
16:57 lbalbalba Liquid--: removing 192.168.11.60 might help. it shouldnt hurt if its not part of the cluster anyway, right ? ;)
16:59 tyl0r joined #gluster
17:00 Liquid-- lbalbalba: Mmkay, I might be misunderstanding. But shouldn't gluster peer probe 192.168.11.61 bring that node into the cluster?
17:00 Liquid-- because the peer probe is successful but I don't see it when I run peer status from the gluster CLI
17:02 lbalbalba JoeJulian: ah got it. its 'gluster pool list'
17:02 tyl0r joined #gluster
17:02 JoeJulian Hah! 103 and 61 both show the peer as being .60? I'm guessing you created a pool, imaged a server, then created two servers from that image.
17:03 JoeJulian Ooh, v3.5.0qa1 is tagged.
17:04 Liquid-- lol well at least .61 knows whats really going on: gluster> peer status
17:04 Liquid-- Number of Peers: 1
17:04 Liquid-- Hostname: 192.168.11.103
17:04 Liquid-- Uuid: 6aa2948c-97ff-4979-a02f-8d891699f719
17:04 Liquid-- State: Peer in Cluster (Connected)
17:04 aliguori joined #gluster
17:06 JoeJulian have you created any volumes yet?
17:06 jskinner_ joined #gluster
17:09 Liquid-- gluster volume list shows no volumes. I just ran 'gluster peer detach 192.168.11.60' and then probed .61 - now it shows up in the list :)
17:09 lbalbalba cool
17:09 semiosis @qa releases
17:09 glusterbot semiosis: The QA releases are available at http://bits.gluster.com/pub/gluster/glusterfs/ -- RPMs in the version folders and source archives for all versions under src/
17:11 plarsen joined #gluster
17:11 lbalbalba hrm. about the preference of using the hostname instead of the ip addr
17:11 lbalbalba this page says at the bottom that ips are stored internally anyway instead of the hostname ? : http://gluster.org/community/documentation/index.php/Gluster_3.2:_Adding_Servers_to_Trusted_Storage_Pool
17:11 glusterbot <http://goo.gl/k1DjO5> (at gluster.org)
17:11 jag3773 joined #gluster
17:11 lbalbalba is that true ?
17:12 lbalbalba Liquid--: guess JoeJulian is right in that you managed to confuse gluster with ip addr changes or cloning vm's or something
17:12 JoeJulian lbalbalba: Look in /var/lib/glusterd/peers
17:13 semiosis lbalbalba: ,,(hostnames)
17:13 glusterbot lbalbalba: Hostnames can be used instead of IPs for server (peer) addresses. To update an existing peer's address from IP to hostname, just probe it by name from any other peer. When creating a new pool, probe all other servers by name from the first, then probe the first by name from just one of the others.
17:13 * JoeJulian said that already... ;)
17:13 semiosis worth saying again
17:14 JoeJulian +1
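A sketch of the probe order glusterbot describes, using hypothetical hostnames for a fresh pool:

    # from server1, probe the other peers by name
    gluster peer probe server2.example.com
    gluster peer probe server3.example.com
    # then, from any one of the others, probe server1 by name so its
    # entry is stored as a hostname rather than an IP
    gluster peer probe server1.example.com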
17:14 Liquid-- hmm still weird. I still get this error:gluster volume create nova-instances replica 2 192.168.11.103:/zfs/nova-instances 192.168.11.61:/zfs/nova-instances && gluster volume start nova-instances
17:14 Liquid-- volume create: nova-instances: failed: /zfs/nova-instances or a prefix of it is already part of a volume
17:14 glusterbot Liquid--: To clear that error, follow the instructions at http://goo.gl/YUzrh or see this bug http://goo.gl/YZi8Y
17:14 Liquid-- when I run gluster volume list... it shows no volumes.
17:15 JoeJulian Have you put glusterbot on ignore? It already gave you the answer.
17:16 semiosis lbalbalba: fixed wiki page
17:16 Liquid-- I've actually run across that article in the past. It did not work for me :/
17:16 JoeJulian Then you didn't follow it.
17:16 semiosis happens so often
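For reference, the gist of the linked fix (roughly; see the article glusterbot points to for the authoritative steps): run it on each server against each affected brick path, here using the brick path from the conversation, and only if the data there really is meant to be re-added to a new volume:

    setfattr -x trusted.glusterfs.volume-id /zfs/nova-instances
    setfattr -x trusted.gfid /zfs/nova-instances
    rm -rf /zfs/nova-instances/.glusterfs
    service glusterd restart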
17:17 lbalbalba semiosis: thanks. still think its funny you need to 'probe once in the reverse direction, using the hostname of the original server' in order to have the 'original' server stored with hostname instead of ip.
17:18 semiosis rotflol
17:18 skered- ok so Tru64 DEC Alpha machines are now mounting GlusterFS NFS!
17:18 skered- However, I see some odd behavior
17:18 JoeJulian skered-: cool, what was it?
17:18 semiosis i see odd behavior too :O
17:19 skered- JoeJulian: Had to turn on nfs.mount-udp
17:19 JoeJulian that exists?
17:19 skered- as of a year ago yes
17:19 skered- http://review.gluster.com/#/c/3337/4/xlators/nfs/server/src/nfs.c
17:19 glusterbot <http://goo.gl/2YtoUJ> (at review.gluster.com)
17:19 semiosis how'd you find that?!
17:19 skered- and it's in the 3.4 branch
17:20 skered- http://www.gluster.org/pipermail/gluster-users/2012-May/033234.html
17:20 glusterbot <http://goo.gl/dmqmUk> (at www.gluster.org)
17:20 skered- That
17:20 semiosis wow
17:20 semiosis good find!
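For anyone hitting the same thing, the option skered- references, with VOLNAME as a placeholder (the Tru64 client-side mount syntax is not shown here):

    gluster volume set VOLNAME nfs.mount-udp on
    gluster volume info VOLNAME   # the option should appear under 'Options Reconfigured'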
17:20 JoeJulian I think it's fascinating that I can learn new stuff about this software every week.
17:20 skered- However, the issue I have now is that I can't mount dirs inside the volume
17:20 semiosis doubt that is possible
17:20 skered- It is
17:20 semiosis with gluster nfs?
17:20 skered- It's NFS so i would expect that to work
17:20 JoeJulian I'm pretty sure that's a known issue.
17:21 skered- JoeJulian: No, it works fine outside of Tru64
17:21 jskinner joined #gluster
17:21 skered- ex: mount gluster:/volume/path/inside/volume /mount/point
17:21 skered- That works fine on Linux
17:21 skered- However in Tru64 it doesn't
17:22 JoeJulian hmm, didn't used to... cool.
17:22 skered- I have to do: mount gluster:/volume /mount/point
17:22 skered- HOWEVER! I can't cd inside /mount/point/...
17:22 skered- The first time I cd to a dir I get "Permission Denied" then cd again everything is fine
17:23 skered- For every directory
17:23 JoeJulian I tell you one other thing I've learned... I'm going to stick with Linux and avoid Tru64.
17:23 lbalbalba just curious: what are the permissions and user/group of the unmounted mountpoint '/mount/point' ?
17:23 skered- 755
17:23 JoeJulian skered-: Check /var/log/glusterfs/nfs.log?
17:24 skered- The permissions appear to be correct
17:24 lbalbalba kay
17:24 skered- JoeJulian: Yeah, I plan on looking at that in a bit
17:25 JoeJulian If it fails once then works the next, it's definitely a bug. Just not sure if it's gluster's or Tru64's.
17:25 JoeJulian Since UDP support has never been officially announced, I'm leaning toward it being a bug in there.
17:26 skered- Well that's UDP only for mountd
17:26 skered- NFS should be using TCP
17:26 lbalbalba skered-: hrm. you running the cd cmds as the root user on the Tru64 box ?
17:26 skered- Yes
17:26 lbalbalba root name squashing bug ?
17:26 lbalbalba try other user ?
17:27 lbalbalba su - nobody ?
17:27 skered- root squash is turned off (and root everywhere else works) but I can try it
17:27 JoeJulian It wouldn't work the second time if it was a permissions issue.
17:27 skered- However, I think it's an issue because I can't mount dirs inside a volume (see above)
17:28 lbalbalba JoeJulian: it shouldnt work the 2nd time if its a bug ;)
17:29 JoeJulian On the contrary. It shouldn't work the 2nd time if it's a valid permissions issue.
17:29 lbalbalba JoeJulian: so i guess the code magically changes between the 1st and the 2nd tries ;)
17:30 skered- Well if you look at the packets gluster isn't returning anything
17:30 skered- Just an empty packet
17:30 skered- er well no payload
17:31 JoeJulian Definitely check that log file then.
17:33 skered- Nothing is written to the logs
17:33 jesse joined #gluster
17:34 Mo___ joined #gluster
17:36 JoeJulian Weird. Just reading through the code, every NFS3ERR code should produce a warning in the log.
17:37 JoeJulian I guess that implies that gluster-nfs isn't sending an error...
17:37 Liquid-- You guys rock by the way, thanks for all of the help :)
17:38 JoeJulian You're welcome. :D
17:38 skered- JoeJulian: You wanna see packet dumps?
17:39 JoeJulian I know too little about NFS to be a lot of help there, but I could try later. I've actually got to get this machine finished and over to the CoLo today.
17:39 ulimit joined #gluster
17:39 skered- You can see the request for a file handle, a blank reply from gluster, the request again, then a real reply with file attrs etc...
17:39 skered- k
17:40 JoeJulian Sounds like you have a pretty clear demonstration of a bug, though. I would suggest you file a bug report with that dump.
17:40 glusterbot http://goo.gl/UUuCq
17:44 skered- lbalbalba: Same result for non-root users
17:46 lbalbalba skered-: thanks. just wondering is all. guess you better file that bug report as JoeJulian already suggested and attach the dump to it then.
17:47 lbalbalba skered-: btw, did you look at all the log files or just nfs.log ?
17:47 skered- All of them
17:47 skered- At least according to 'ls -ltr' run seconds after the error
17:47 skered- newest log was 10 minutes ago
17:48 skered- That's the time it was written, that is
17:48 lbalbalba skered-: weird
17:49 lbalbalba skered-: nothing in the client Tru64 log files either ? no idea if there are any and where they should be though
17:49 skered- /var/adm FYI :)
17:50 diegows joined #gluster
17:50 lbalbalba skered-: :)
17:51 bivak joined #gluster
17:51 lbalbalba skered-: if you file that bug report and attach the dump to it then people can take a look at that, see if theres anything in that
17:55 skered- I do see
17:55 skered- vmunix: NFS3 LOOKUP: Directory attributes from gluster.... are wrong
17:56 skered- Only one message and I can't recall what I was doing at that time
17:56 plarsen joined #gluster
17:56 vpshastry joined #gluster
17:59 phox joined #gluster
17:59 phox Hi.  Uh.  How do I STOP gluster from listening on TCP port 2049?
18:01 skered- set nfs.disable?
18:01 lbalbalba @ports
18:01 glusterbot lbalbalba: glusterd's management port is 24007/tcp and 24008/tcp if you use rdma. Bricks (glusterfsd) use 24009 & up for <3.4 and 49152 & up for 3.4. (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111 and 2049 since 3.4.
18:02 phox lanning: note "depends" not how to turn it off :)
18:02 phox skered-: that's a lot of volumes I need to set that for if I can't do it globally :/
18:02 getup- joined #gluster
18:02 JoeJulian Yep, no global option. :(
18:02 phox so as long as I turn it off for every volume it will stop hogging that port?
18:03 phox I guess that's what I do, and update my puppet stuff to disable it automagically w/ new volume
18:03 JoeJulian for d in $(gluster volume list) ; do gluster volume set $d nfs.disable on; done
18:03 phox yep
18:04 phox would just be tidier if it was global, but yeah this is very tractable, just lame ;)
18:05 phox yep appears to have gotten TFO, thanks
18:05 rotbeard joined #gluster
18:09 bivak joined #gluster
18:19 lbalbalba well i guess ive done enough damage for one day. bye
18:23 sweeper left #gluster
18:29 jskinner joined #gluster
18:31 criticalhammer joined #gluster
18:31 criticalhammer Hi everyone
18:31 criticalhammer Any gluster folks around willing to chat?
18:33 JoeJulian hi
18:33 glusterbot JoeJulian: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
18:33 JoeJulian criticalhammer: ^
18:33 mtanner_ joined #gluster
18:36 criticalhammer I'm in the research stages of a 48TB gluster setup. I'd like to know what people's feelings are about using mdadm software RAID. Does the gluster community have any hard issues with software raid?
18:36 JoeJulian No problem at all.
18:37 jskinner joined #gluster
18:37 criticalhammer Great to hear!
18:38 tg3 joined #gluster
18:38 elyograg hardware raid is better.  if that's not an option, software raid can work very well.  Except for desktop systems, stay away from fakeraid - motherboard RAID chipsets like Intel ICH that use the system CPU and operating system drivers, rather than dedicated hardware.
18:39 criticalhammer elyograg: What benefits, with todays hardware, does hardware raid provide over software raid?
18:39 criticalhammer My plan is to use mdadm as my software raid
18:40 criticalhammer and have dedicated brick servers running a 2.2 GHz quad-core Xeon CPU with 8 GB of ram
18:40 elyograg a dedicated CPU for raid operations, plus in many cases, cache memory that is either NVRAM or backed by a battery.
18:41 criticalhammer a server UPS solves the battery issue
18:41 elyograg NVRAM or battery backed cache allows you to turn on write caching at the RAID controller, which is awesome for performance in workloads where you're not writing constantly.
18:42 criticalhammer yeah thats the only real beneift I can think of for hardware raid.
18:42 criticalhammer because write caching is amazing
18:42 JoeJulian scsi command response times are much shorter with hardware raid.
18:42 JoeJulian for most commands.
18:43 criticalhammer True. With that said can a faster CPU speed up the response times?
18:43 JoeJulian But hey, if you don't get the performance you want with software raid, you can always change it later...
18:43 JoeJulian No. That's entirely dependent on the i/o bus.
18:43 criticalhammer Okay.
18:44 JoeJulian With hardware raid you're offloading the i/o bus wait times to the raid controller. Your system only needs to be concerned with pcie wait times then.
18:45 criticalhammer Good point.
18:46 elyograg If you're planning to do raid, you get what you pay for with hard drives.  Enterprise drives IMHO are worth the premium over desktop drives.  As long as everything's working perfectly, desktop drives don't really perform differently, but when errors start to happen, they behave very differently.
18:47 criticalhammer I agree about the enterprise drives. I've seen the differences myself.
18:49 LoudNoises joined #gluster
18:54 glusterbot New news from newglusterbugs: [Bug 1031164] After mounting a GlusterFS NFS export intial cd'ing into directories on a Tru64 resaults in a Permission Denied error <http://goo.gl/oC9dnX> || [Bug 1031166] Can't mount a GlusterFS NFS export if you include a directory in the source field on Tru64 (V4.0 1229 alpha) mount <http://goo.gl/Rh8LGu>
18:56 vpshastry joined #gluster
18:59 chirino joined #gluster
19:02 sjoeboo joined #gluster
19:03 Technicool joined #gluster
19:04 failshel_ joined #gluster
19:10 vpshastry left #gluster
19:15 phox hardware RAID is not better
19:16 phox frequently it obscures the fact that things are not actually happening in the order they're supposed to, which can lead to corruption
19:17 phox proper filesystems (e.g. ZFS) are properly CoW and operations on them are effectively atomic because of their architecture.
19:18 phox or you could have some flakey pile of garbage aboard your overpriced hardware RAID card
19:19 JoeJulian That sounds like some rather emotionally backed opinion.
19:19 phox criticalhammer: I highly recommend ZFS over mdadm.  It's much better behaved and less annoying to manage.  zfsonlinux 0.6.3 is likely to be a very good release and is due out this year.
19:19 phox JoeJulian: having just dealt with entire SSDs getting wiped because of non-peer-reviewed firmware from one of the world's leading RAID card vendors... no
19:20 phox and if you really want NVRAM or similar that's faster than your spinning rust, ZFS SLOG has you covered
19:21 criticalhammer phox: Ive never used zfs before, but i've heard good things about it. How well does it handle degraded RAID rebuilds?
19:21 JoeJulian Some would argue that zfs is still unstable and not ready for production.
19:21 phox criticalhammer: can you elaborate on the question?  generally quite well.
19:21 Liquid-- joined #gluster
19:21 phox JoeJulian: I would argue that commercially-released storage HBA firmware is not ready for production, per my previous comment :P
19:22 phox JoeJulian: LSI SAS HBAs ate my data.
19:22 phox the solution to reliably crap hardware is generally to move the redundancy farther up the stack.
19:22 ira JoeJulian: I've run non-linux ZFS systems.  They were quite production ready.  :/.
19:22 ira Can't speak to ZoL.
19:22 phox got servers that might go down?  don't dick around with load-balancing if you can make the _application_ support load balancing.
19:22 phox thus my comment about 0.6.3
19:23 JoeJulian Heh, but hardware raid cards have been around and (mostly) stable for over a decade. Like all hardware, closed-source manufacturers can and do ship faulty firmware occasionally.
19:23 phox as much as it's a bit rough around the edges, consider e.g. ext4 eating people's data
19:23 criticalhammer I personally dont mind the mdadm management phox, ive grown used to it
19:23 phox by design ZFS doesn't go around doing things like that very easily
19:23 phox I'm used to it not being very informative, and punting drives because of slow response times and other undesirable, badly-thought-out things like that
19:23 ira JoeJulian: Which card's firmware has been around over a decade?
19:24 JoeJulian ira: agreed. I erroneously make statements based on my linux-centric focus occasionally.
19:24 phox JoeJulian: thus my point about moving redundancy farther up the stack.
19:24 davidbierce Isn't gluster supposed to sit on top of the stupid hardware/raid and deal with the failure....
19:24 criticalhammer I don't think so davidbierce
19:24 phox davidbierce: true, although it's still desirable to not have that invoked in the first place
19:24 davidbierce I agree, but I think big....things fail all the time in big
19:25 phox that's like saying hey, we have seatbelts, so who cares about ABS and ESC and stuff!
19:25 JoeJulian Personally, I use glusterfs replication to handle fault-tolerance across servers, rather than caring about it happening on the server itself.
19:25 phox davidbierce: they do, but a lower failure frequency still gets you into truly-critical territory far less often
19:25 criticalhammer JoeJulian: with my build I was thinking the same thing
19:25 phox yeah, nuts to brakes, we have airbags!
19:25 JoeJulian ira: oh, and I have 3ware cards still in operation whose firmware is probably a decade old. :)
19:25 phox that's a bad attitude
19:25 ira JoeJulian: I'm sorry.
19:26 phox JoeJulian: 3ware fails the "hardware RAID performs better" test
19:26 phox 5-10 years ago softraid was _slaughtering_ 3ware HW RAID
19:26 ira And when was the last update? :)
19:26 phox ira: "new" is not necessarily better so as much as I'm on your side I'm going to have to poke a hole in that comment ;)
19:27 phox "stable and doesn't break" is what's important.  this is functionality, not security, right?
19:27 ira phox: I'm speaking about bug fixes etc.. or is it abandoned...
19:27 davidbierce Not saying crash the car or don't install new better features.  I'm just saying, drive a truck, a semi, 3 compacts, 2 sedans and deal with the fact that sometimes you're gonna have an accident
19:27 phox as far as I'm concerned HBAs and RAID cards should have provable bloody firmware.
19:27 phox anyone doing otherwise should be shot.  which means a lot of people should be shot.
19:27 ira phox: Your mouth to god's ears.
19:28 ira phox: Though I'd rather not shoot the people ;)
19:29 phox no I'd rather just have people buying this stuff actually stop buying lowest-bidder shite and start caring about and understanding what matters
19:29 phox i.e. provability
19:29 phox drive firmware and HBA firmware should be provable
19:29 phox they perform a _very_ limited set of operations
19:30 phox and anyone wanting to extend e.g. ATA specs in ways that make this significantly more difficult can go jump off a bridge and/or be told that their toys can be implemented in a mode of operation that's not proven and then let customers decide if they want bells and whistles or their data.
19:30 phox :)
19:31 phox provability of happy little finite state machines, that being what HDDs are/do, is much more trivial than some of the other stuff that's been proven
19:31 phox and yet also more important than e.g. LaTeX
19:37 hflai joined #gluster
19:37 NuxRo joined #gluster
19:37 Peanut joined #gluster
19:37 diegol_ joined #gluster
19:37 tqrst joined #gluster
19:37 VerboEse joined #gluster
19:37 tqrst joined #gluster
19:37 RedShift joined #gluster
19:37 baoboa joined #gluster
19:37 compbio joined #gluster
19:38 jmeeuwen joined #gluster
19:46 sohoo joined #gluster
19:46 sohoo After peer probe, in the remote machine, the originating peer's information is stored with IP address instead of hostname. To fix this, probe once in the reverse direction, using the hostname of the original server. This will update the original server's IP with its hostname.
19:47 sohoo is it possible to fix the hostname issue after the storage is active for a while or just after the peer probe
19:51 RicardoSSP joined #gluster
19:52 RicardoSSP joined #gluster
19:53 lbalbalba joined #gluster
19:53 semiosis sohoo: it's difficult
19:54 semiosis sohoo: idk any easy way.  the hard way is to stop glusterd & replace ip with hostname in the volfiles
19:54 sohoo semiosis: tnx, i meant the reverse probe that was suggested by gluster to fix the first probing server
19:56 sohoo you mean stop glusterd on all servers and update the volfiles manually? that would work with no issues?
19:59 semiosis i dont know
20:00 semiosis try just doing the probe
20:00 semiosis but i suspect that won't change the existing volume config
20:00 semiosis maybe you can use a replace-brick commit force to update the brick address in the volume, not sure about that either
20:01 JoeJulian If it were me, I'd do the stop everything and replace method.
20:01 JoeJulian Then I'd punish myself for building a dynamic system with static entities.
20:02 sohoo i need to do a full OS upgrade on that host, it would be better to fix that now before upgrading it but i want to be sure im not breaking anything by updating it
20:04 zwu joined #gluster
20:04 sohoo joe, will stopping all glusterd and updating their volfiles with the new hostname work for sure?
20:07 Technicool joined #gluster
20:12 semiosis joined #gluster
20:14 sohoo its too scary to update that and then find out there are big issues
20:15 failshell joined #gluster
20:18 JoeJulian sohoo: find /var/lib/glusterd -type f | xargs sed -i 's/ip\.address/hostname/g'
20:20 sohoo Joe: from where(which node)
20:20 ipvelez hello good afternoon
20:21 ipvelez I was trying to update gluster using the yum repositories at http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/glusterfs-epel.repo  but yum update says it's not found, error 404
20:21 glusterbot <http://goo.gl/5beCt> (at download.gluster.org)
20:30 ira joined #gluster
20:31 sohoo joe: i'd rather do it manually on each node. will it work for sure?
20:34 semiosis sohoo: no one can answer that
20:34 bugs_ joined #gluster
20:34 semiosis test it out before you try on your production cluster
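Pulling together what JoeJulian and semiosis describe above into one untested sketch, with a hypothetical IP and hostname; take a backup first and rehearse it on a test cluster, since nobody in the channel would vouch for it:

    # on every server, with glusterd stopped everywhere:
    service glusterd stop
    cp -a /var/lib/glusterd /var/lib/glusterd.bak
    find /var/lib/glusterd -type f | xargs sed -i 's/192\.0\.2\.10/server1.example.com/g'
    service glusterd start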
20:35 diegol__ Guys, I have lost my gluster config files, so I tried to recreate my volumes without losing data, but when I try to start the volume I get an error regarding a volume id mismatch. Any ideas? I have read that I can remove the extended attributes but I'm not sure :-S
20:36 semiosis prefix of it is already part of a volume
20:36 semiosis or a prefix of it is already part of a volume
20:36 glusterbot semiosis: To clear that error, follow the instructions at http://goo.gl/YUzrh or see this bug http://goo.gl/YZi8Y
20:36 semiosis diegol__: ^^^ first link
20:37 diegol__ yeah semiosis thanks that was the first link I have checked
20:37 diegol__ now you have just confirmed my thoughts :-)
20:37 diegol__ thanks again
20:42 diegol__ semiosis: but now I get this Failed to get extended attribute trusted.glusterfs.volume-id for brick
20:45 TDJACR joined #gluster
20:51 sohoo semiosis: tnx ill try.. thought someone here tried it. when ill try ill know for sure :)
20:52 lbalbalba joined #gluster
20:55 lbalbalba hi. just wondering if anyone knows: can you turn nfsacl support on/off with nfs exported gluster volumes, and/or how permissions are handled when a node (client/server) doesnt support nfsacls ?
21:04 lbalbalba nvrmind, found it: mount -t nfs -o noacl
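In other words, something along these lines against a gluster NFS export, with the server and volume names as placeholders; gluster's built-in NFS server only speaks NFSv3:

    mount -t nfs -o vers=3,noacl server1:/volname /mnt/volname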
21:07 tyl0r joined #gluster
21:25 davidbierce joined #gluster
21:26 zwu joined #gluster
21:57 criticalhammer left #gluster
22:01 sticky_afk joined #gluster
22:01 stickyboy joined #gluster
22:07 geewiz joined #gluster
22:12 sticky_afk joined #gluster
22:12 stickyboy joined #gluster
22:12 zerick joined #gluster
22:43 badone joined #gluster
23:07 jskinner_ joined #gluster
23:13 zaitcev joined #gluster
23:36 Nuxr0 joined #gluster
23:40 Liquid-- joined #gluster