
IRC log for #gluster, 2014-10-06


All times shown according to UTC.

Time Nick Message
00:02 cjanbanan joined #gluster
01:05 cjanbanan joined #gluster
01:17 Zordrak joined #gluster
01:29 sputnik13 joined #gluster
01:40 sputnik13 joined #gluster
01:47 cjanbanan joined #gluster
01:49 dtrainor joined #gluster
01:50 plarsen joined #gluster
01:51 an joined #gluster
01:51 plarsen joined #gluster
02:00 suliba_ joined #gluster
02:15 an joined #gluster
02:17 cjanbanan joined #gluster
02:25 cjanbanan joined #gluster
02:42 an joined #gluster
03:02 cjanbanan joined #gluster
03:33 sputnik13 joined #gluster
03:36 side_control joined #gluster
03:45 an joined #gluster
03:46 sputnik13 joined #gluster
03:53 cjanbanan joined #gluster
03:55 itisravi joined #gluster
04:00 prasanth_ joined #gluster
04:34 cyber_si joined #gluster
04:43 soumya joined #gluster
04:43 justinmburrous joined #gluster
04:46 jiffin joined #gluster
04:53 ramteid joined #gluster
05:10 nbalachandran joined #gluster
05:12 sputnik13 joined #gluster
05:13 nbalachandran joined #gluster
05:25 an_ joined #gluster
05:27 spandit joined #gluster
05:30 an joined #gluster
05:31 an_ joined #gluster
05:44 sputnik13 joined #gluster
05:49 hagarth joined #gluster
05:52 jiffin joined #gluster
05:55 sputnik13 joined #gluster
06:00 R0ok_ joined #gluster
06:02 cjanbanan joined #gluster
06:14 an joined #gluster
06:19 an joined #gluster
06:19 kdhananjay joined #gluster
06:26 an_ joined #gluster
06:30 Philambdo joined #gluster
06:32 cjanbanan joined #gluster
06:36 an joined #gluster
06:38 ricky-ti1 joined #gluster
06:48 XpineX joined #gluster
06:50 anands joined #gluster
06:51 an joined #gluster
06:52 an_ joined #gluster
06:55 ekuric joined #gluster
07:05 Fen2 joined #gluster
07:05 nbalachandran joined #gluster
07:06 ctria joined #gluster
07:08 ivok joined #gluster
07:10 ronis joined #gluster
07:15 cjanbanan joined #gluster
07:15 ricky-ticky1 joined #gluster
07:26 Fen2 Hi, i don't understand the difference between striped replicated volume & distributed striped replicated volume ? :/
07:27 cjanbanan joined #gluster
07:36 kdhananjay joined #gluster
07:41 fsimonce joined #gluster
07:52 jiffin joined #gluster
07:53 ronis Have you looked on the internet ? :D
07:55 ronis Fen2, try to understand the difference between these words...
07:56 Fen2 I went here but i need more details : http://www.gluster.org/wp-content/uploads/2012/05/Gluster_File_System-3.3.0-Administration_Guide-en-US.pdf (page 27 & 28)
07:57 coreping_ joined #gluster
07:58 ronis Fen2, take a look at this link: https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.1/html/Administration_Guide/chap-User_Guide-Setting_Volumes.html
07:58 glusterbot Title: Chapter 8. Red Hat Storage Volumes (at access.redhat.com)
08:00 Fen2 i just don't understand what changes between distributed and non-distributed...
08:01 justinmburrous joined #gluster
08:06 Fen2 Like Striped volume VS Distributed Striped volume...
08:06 Fen2 Or Replicated volume VS Distributed Replicated volume
08:07 Fen2 And Striped Replicated volume VS Distributed Striped Replicated volume ...
08:07 justinmburrous joined #gluster
08:14 Slydder joined #gluster
08:15 Slydder hey all
08:17 Fen2 Slydder: Hi :)
08:19 Slydder have a strange situation. when mounting a gluster share I get nothing but question marks when I do an 'ls' and I cannot even access anything inside the directories with these question marks.
08:33 ronis Slydder, tell more about your gluster deployment, like OS version, gluster version, etc.
08:34 Slydder debian wheezy, gfs 3.5.2, 2 node replication using fuse mounts.
08:35 Slydder thought it may have been the fuse problem where you have to remount. however, it is the same on both nodes and remounting has no effect.
08:36 ronis Is this a production server ?
08:37 Slydder not yet
08:37 Slydder was supposed to be but not with the way this is going.
08:37 ronis you could try to restart glusterfs services on both servers and then check.
08:38 ronis If it won't help, try to stop the gluster volume and then start it again using the "gluster" command
08:40 ronis Slydder, are you using gluster for virtualization storage ?
08:40 Slydder nope.
08:43 ronis have you restarted everything ?
08:45 ivok joined #gluster
08:48 harish joined #gluster
08:52 Slydder nothing seems to help. stopped the volume then stopped the server. started the server then started the volume, made sure all was as it should be, then did the mount and still nothing but question marks. first time I have ever had this issue.
08:53 Slydder going to try nfs and see if that helps
08:55 ronis have you restarted your server ?
08:55 ronis i mean, reboot
08:55 harish joined #gluster
08:56 Slydder nope
08:56 Slydder both containers are on production host units. the containers themselves are not production though.
09:01 vimal joined #gluster
09:06 ws2k3 when my volume is already running how can i see which bricks are distributed and which are replicated ?
09:07 R0ok_ gluster volume info <<VOLNAME>>
09:07 Fen2 gluster volume status
09:07 R0ok_ ws2k3: ^^
09:07 Fen2 gluster volume info
09:07 Slashman joined #gluster
09:07 Fen2 gluster volume status "volname" detail
09:09 ws2k3 all these commands do not tell me which brick is a replica
09:10 ws2k3 all of them do list the bricks and show the status, disk free, inode free etc, but they do not tell me which brick is a replica and which is distributed
09:11 Fen2 you see if the volume is replicated or not, so bricks are like the volume
09:14 R0ok_ ws2k3: as Fen2 has said, it will show you the volume type & bricks (including path) in the volume, based on that you can know which brick belongs to which volume & what type it is
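As an aside, a rough sketch of what that 'gluster volume info' output typically looks like for a hypothetical distributed-replicated volume named MYVOL built from four example bricks; field names can vary slightly between gluster versions:

    # gluster volume info MYVOL
    Volume Name: MYVOL
    Type: Distributed-Replicate
    Status: Started
    Number of Bricks: 2 x 2 = 4
    Transport-type: tcp
    Bricks:
    Brick1: serverA:/brick1
    Brick2: serverB:/brick1
    Brick3: serverA:/brick2
    Brick4: serverB:/brick2

The bricks are listed in the order they were given at create time, so with 'replica 2' each consecutive pair of bricks forms one replica set.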
09:14 ws2k3 hmm i dont understand that, i do see all my bricks but how does that tell me which brick is replicated and which one is distributed?
09:15 Fen2 with : gluster volume status "volname" detail, you see your volume right ?
09:16 Fen2 so if your volume is distributed, all the bricks below are also distributed
09:16 ws2k3 yes i see the volume
09:16 Fen2 if your volume is replicated, all your brick in the volume are replicated too
09:16 ws2k3 its a distributed replicated volume
09:17 Fen2 So all your bricks are replicated distributed
09:17 ws2k3 are you sure ?
09:17 ws2k3 cause when i created the volume i needed to specify which ones are distributed and which ones are replicated
09:17 Fen2 yes, you have created your volume like that : gluster volume create replicate .....
09:18 ws2k3 is it not that the volume is 2 disks and the other 2 are just copies of the first 2 disks..
09:18 Fen2 .... server:/brick server:/brick ....
09:18 ws2k3 so how can i find out which disks are distributed and which ones are replicated
09:18 ws2k3 or do i basically misunderstand glusterfs, thats a possibility too
09:19 Fen2 hum... ok you want the original brick right ?
09:20 ndevos ws2k3: maybe lsgvt can help you: https://forge.gluster.org/lsgvt/lsgvt/blobs/raw/master/lsgvt
09:20 ws2k3 yes i wanna know which ones are the original and which ones are the backup bricks
09:20 ndevos it is a tool that shows a tree like diagram for your volumes
09:20 Fen2 maybe there is no difference, files are stored at the same time
09:21 ws2k3 Fen2 so of my 4 disks 2 can go down ?
09:21 ws2k3 does not matter which one as long as 2 bricks are online it keeps working ?
09:21 ndevos and indeed, it doesnt work like 'original' and 'backup' bricks, a client writes to all bricks in a replica-set
09:22 Fen2 i'm not sure but like ndevos said, there is no difference, so while one of your bricks is still alive, you can work
09:23 ws2k3 okay so just for my imagination.. if i have a distributed replicated volume and i have 100 bricks, 50 can go offline and it still works? does not matter at all which 50, as long as 50 bricks are still online my data is fine?
09:23 ndevos lets make it a little simpler, 2 servers, both 2 bricks - serverA:/brick1, serverA:/brick2 and serverB:/brick1 + 2
09:23 Fen2 no, if your data are replicated on one of your 50 bricks
09:24 ndevos if you make a distributed-replicated volume, like 'gluster volume create MYVOL serverA:/brick1 serverA:/brick2 serverB:/brick1 serverB:/brick2'
09:25 ndevos (there should have been a 'replica 2' in the command...)
09:25 ws2k3 yes i know and i have placed that replace 2 in the command
09:25 ws2k3 replica 2*
09:26 ndevos the replicated configuration, would be replica-pair-1 (serverA:/brick1 serverA:/brick2) + replica-pair-2 (serverB:/brick1 serverB:/brick2)
09:26 ndevos in this case, if serverA goes down, you do not have one brick from replica-pair-1 left
09:27 ws2k3 yes i understand that but if i made it like this i would be pretty stupid
09:27 ndevos it is much better to create the volume like this: create MYVOL serverA:/brick1 serverB:/brick1 serverA:/brick2 serverB:/brick2
09:27 ws2k3 just never place a replica brick on the same server
09:28 ws2k3 yeah just never place a replica brick on the same server, that is not smart
09:29 ndevos this principle stays the same when you have 100 bricks, and 50 go down...
09:29 ws2k3 so if i understand correctly, lets assume i have 100 servers, every server has 1 brick = 100 bricks (and i would do it like create MYVOL serverA:/brick1 serverB:/brick1 serverA:/brick2 serverB:/brick2), then 50 servers/bricks can go offline and it would still be working ?
09:30 ws2k3 and it would not matter which 50, as long as i have 50 servers/bricks online it would stay working
09:31 ndevos no, it does matter - if two servers go down, that both have the brick of a replica-2 pair, the contents of that replica-set would become unavailable
09:32 ws2k3 ah of course i understand
09:32 ndevos :)
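A minimal sketch of the ordering ndevos describes, using the hypothetical hosts and brick paths from the discussion; with 'replica 2' the bricks are grouped into replica pairs in the order they appear on the command line:

    gluster volume create MYVOL replica 2 \
        serverA:/brick1 serverB:/brick1 \
        serverA:/brick2 serverB:/brick2
    gluster volume start MYVOL

Here serverA:/brick1 + serverB:/brick1 form one pair and serverA:/brick2 + serverB:/brick2 the other, so losing either whole server still leaves one copy of every file.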
09:33 Fen2 www.gluster.org/wp-content/uploads/2012/05/Gluster_File_System-3.3.0-Administration_Guide-en-US.pdf
09:33 Fen2 page 26
09:33 ws2k3 i just have A LOT of old hardware and i'm trying to figure out if i can use glusterfs to make a stable, redundant, fast storage cluster out of it
09:34 ndevos I guess you can, but 'fast' really depends on a lot of things :)
09:35 ws2k3 cause what are most people doing, placing a raid controller in the server or using distributed replicated?
09:38 ndevos most (?) enterprise deployments use RAID-6 (or 10) for their bricks and distribute-replicated volumes
09:38 Fen2 btw it's recommended to use raid disk
09:47 ws2k3 and it is possible to change the volume at any time, right ?
09:48 Fen2 add and remove brick ? yes
09:49 Fen2 ndevos: can you simply explain to me what's the difference between a distributed striped and a striped volume ?
09:49 ws2k3 Fen2 i do know that :D
09:49 Fen2 ws2k3: plz :p
09:49 ndevos go ahead, ws2k3!
09:50 ws2k3 fen2 distributed striping means part 1 of file 1 is on server a and part 2 of file 1 is on server b, this means you need to access 2 servers to read 1 file
09:51 ws2k3 with distributed striping it spreads the stripes across the bricks it has
09:51 Fen2 and with only striped volume ?
09:52 ws2k3 correct me if i'm wrong but i think you cannot have only striped
09:52 Fen2 https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.1/html/Administration_Guide/chap-User_Guide-Setting_Volumes.html
09:52 glusterbot Title: Chapter 8. Red Hat Storage Volumes (at access.redhat.com)
09:53 ndevos Fen2: rather check the diagrams starting at https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3/html-single/Administration_Guide/index.html#Creating_Distributed_Volumes
09:53 glusterbot Title: Administration Guide (at access.redhat.com)
09:53 ws2k3 i have to admit that i dont know
09:54 ndevos ws2k3: yes, you can create a stripe-only volume, all the bricks of that volume will have some 'stripes' of the files
09:54 ndevos the bricks will not have complete files on them, but kinda-like 'sparse' files
09:55 ndevos distribute-only, contains complete files on the bricks, but not each brick contains all the files
09:55 ndevos with distribute-only, you can lose a brick, and still access the files on the remaining bricks
09:55 Fen2 if i understand, only striped volume, it's just with 1 gluster-server ?
09:55 ndevos with stripe-only, you can not access any file when a brick fails
09:56 ndevos no, stripe can use multiple bricks, the file will get split into pieces and each piece gets placed on a different brick (more or less)
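For illustration, hedged sketches of the two layouts being compared, again with made-up hosts and brick paths (stripe volumes were still supported in the 3.x releases discussed here):

    # stripe-only: every file is split into chunks spread over both bricks
    gluster volume create STRIPEVOL stripe 2 serverA:/brickS serverB:/brickS

    # distributed-striped: files are striped inside one 2-brick set,
    # and different files land on different sets
    gluster volume create DSTRIPEVOL stripe 2 \
        serverA:/brick1 serverB:/brick1 \
        serverA:/brick2 serverB:/brick2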
09:57 ctria joined #gluster
10:03 jiffin joined #gluster
10:04 jiffin joined #gluster
10:08 justinmburrous joined #gluster
10:21 ws2k3 ndevos so when using replica 2 and i always create MYVOL serverA:/brick1 serverB:/brick1 serverA:/brick2 serverB:/brick2, then serverA:/brick1 and serverB:/brick1 are one pair ?
10:22 bala joined #gluster
10:23 ws2k3 so you always define it in that structure, like with replica 2, create MYVOL serverA:/brick1 serverB:/brick1 means those are one pair, and when doing replica 3, create MYVOL serverA:/brick1 serverB:/brick1 serverC:/brick1
10:23 ws2k3 means those are a pair, right ?
10:25 ndevos ws2k3: yes, it is like that
10:25 ws2k3 okay thank you
10:25 ndevos you're welcome
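Sketch of the same grouping rule with 'replica 3', using made-up hosts; every consecutive group of three bricks in the command becomes one replica set:

    gluster volume create MYVOL3 replica 3 \
        serverA:/brick1 serverB:/brick1 serverC:/brick1 \
        serverA:/brick2 serverB:/brick2 serverC:/brick2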
10:28 [1]Mortem joined #gluster
10:28 [1]Mortem Hi
10:28 glusterbot [1]Mortem: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
10:29 [1]Mortem I have 2 questions related to Gluster 3.5.2
10:29 [1]Mortem Where can I find documentation (Install/Admin) related to this release?
10:31 ws2k3 http://www.gluster.org/wp-content/uploads/2012/05/Gluster_File_System-3.3.0-Administration_Guide-en-US.pdf
10:31 ws2k3 http://www.gluster.org/community/documentation/index.php/QuickStart
10:31 [1]Mortem Why in the geo-replication is there no mechanism locking the file on the slave to ensure the file can't be loaded by a third party before it has been fully transferred? In our case we send large files (100+ GB) and we want to ensure they can't be loaded before being fully transferred. With older releases, the files in transfer had a random extension added, so that was good, but now, not anymore
10:32 [1]Mortem ws2k3: So no specific documentation for 3.5.x version? The geo-replication being totally new, it is not very accurate to use 3.3, no?
10:38 ws2k3 dont know where to find it
10:45 Pupeno joined #gluster
10:53 diegows joined #gluster
11:06 stickyboy joined #gluster
11:07 bala joined #gluster
11:08 justinmburrous joined #gluster
11:13 virusuy joined #gluster
11:13 virusuy joined #gluster
11:15 ninkotech_ joined #gluster
11:15 ninkotech__ joined #gluster
11:21 mojibake joined #gluster
11:32 soumya joined #gluster
11:40 LebedevRI joined #gluster
11:59 glusterbot New news from newglusterbugs: [Bug 958781] KVM guest I/O errors with xfs backed gluster volumes <https://bugzilla.redhat.com/show_bug.cgi?id=958781>
11:59 Fen1 joined #gluster
12:00 calum_ joined #gluster
12:04 anands joined #gluster
12:08 Fen1 Hi, i'm back. Can you explain to me what exactly the difference is between a replicated volume and a distributed replicated volume ? because i don't understand yet
12:09 justinmburrous joined #gluster
12:19 Fen1 Is the difference just in the fact that there are more bricks ?
12:26 ndevos Fen1: if you have N bricks, and use 'replica 2', you have N/2 replica-pairs
12:27 ndevos now, if you have more than 1 replica-pair, distribute will place files on one replica-pair (2 bricks)
12:29 anands joined #gluster
12:30 Fen1 ok thx :)
12:31 Fen1 So is it better to make a lot of replicated volumes or just one huge distributed replicated volume ?
12:31 ndevos it depends...
12:32 ndevos if your replicated volume runs out of space, you likely want to add bricks and the volume becomes distribute-replicated automatically
12:33 ndevos if you use your volumes for archiving, you may want to add more volumes when one gets full
12:34 R0ok_ ndevos: i think you meant add more bricks instead of 'add more volumes' right ?
12:35 ndevos R0ok_: no, I really meant volumes - you can extend your storage with adding bricks to an existing volume, or create new volumes with new bricks
12:36 justinmburrous joined #gluster
12:36 ndevos which is most suitable, really depends on the use-case and all
12:37 itisravi joined #gluster
12:37 R0ok_ ndevos: i did extended the size of some volume by adding more bricks to it..but that was sometime ago
12:38 jontesehlin joined #gluster
12:38 ndevos R0ok_: yeah, that is one common way, and I think most people do it like that
12:39 jontesehlin any one running GlusterFS on SmartOS? :)
12:40 ndevos Fen1: btw, I tend to create different volumes for different tasks/projects/.. and extend them with bricks when needed, but a new project would get its own new volume
12:40 Fen1 R0ok_: * I have extended,  you mean ?
12:40 Fen1 ndevos: ok thx a lot, i now understand ;)
12:41 ndevos jontesehlin: I have not heard of anyone doing that - but that does not mean nobody does it ;)
12:42 R0ok_ ndevos: as always, you have to rebalance :)
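A hedged sketch of that grow-then-rebalance flow, assuming the hypothetical replica-2 MYVOL from earlier and a new pair of bricks on serverC and serverD:

    # add one more replica pair; the volume stays (or becomes) distribute-replicate
    gluster volume add-brick MYVOL serverC:/brick1 serverD:/brick1

    # spread existing files over the new bricks, then watch progress
    gluster volume rebalance MYVOL start
    gluster volume rebalance MYVOL status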
12:44 _NiC joined #gluster
12:44 ira_ joined #gluster
12:46 julim joined #gluster
12:49 _NiC joined #gluster
12:49 chirino joined #gluster
12:57 ctria joined #gluster
13:00 ira joined #gluster
13:01 theron joined #gluster
13:02 bennyturns joined #gluster
13:02 Slydder gluster will not start with nfs enabled. any ideas why?
13:03 SmithyUK joined #gluster
13:04 SmithyUK Hi guys, quick question. What happens when a disk gluster is using gets full during a forced rebalance? Will gluster cope with this? Does it impact the DHT?
13:05 SmithyUK I should add that gluster seems to be ignoring my cluster.min-free-disk of 5 and writing to it regardless
13:06 plarsen joined #gluster
13:23 msmith joined #gluster
13:25 Slydder does ANYONE have an idea why nfs support is not starting when I start glusterd even when rpcbind and rpc.statd are running and no nfs servers are running?
13:30 ndevos Slydder: you can probably find more details in /var/log/glusterfs/nfs.log
13:31 Slydder it's empty. last entry found in nfs.log.1 is from the 29th of last month.
13:32 rolfb joined #gluster
13:34 Slydder and funnily enough. trying to start glusterd with --debug --no-daemon has no effect and things just go on as usual in the background.
13:43 ndevos just try 'glusterd --debug', --no-daemon gets set automatically
13:44 Slydder nope
13:44 theron joined #gluster
13:46 nueces joined #gluster
13:46 Norky joined #gluster
13:49 Fen1 do you really need to do : gluster start ?
13:49 jmarley joined #gluster
13:50 Fen1 *glusterd start ? with the last version i have not done once... !?
13:51 Fen1 and it works
13:51 ndevos Slydder: and there are no glusterd and glusterfs processes running on that system?
13:51 Slydder nope. all shut down.
13:52 ndevos well, at least there should be something in /var/log/glusterfs/etc-glusterfs-glusterd.log ?
13:56 Fen1 Slydder: You really need to run glusterd ?
13:57 Slydder ok. got it going. was actually my fault.
13:58 Slydder forgot I had the cluster running and stopped gfs. this, of course, triggered a failover of gfs and the ip for gfs. the rest I don't think I need to elaborate on.
14:00 Fen1 ndevos: have you run glusterd at any moment ?
14:00 jmarley joined #gluster
14:02 itisravi joined #gluster
14:03 tdasilva joined #gluster
14:05 chirino joined #gluster
14:06 coredump joined #gluster
14:06 Fen1 ok nvm :p
14:16 wushudoin joined #gluster
14:23 justinmburrous joined #gluster
14:25 Fen1 when it says : 29th Sep, 2014 - 3.6.0 GA. What does GA mean ?
14:29 glusterbot New news from newglusterbugs: [Bug 1149723] Self-heal on dispersed volumes does not restore the correct date <https://bugzilla.redhat.com/show_bug.cgi?id=1149723> || [Bug 1149725] Self-heal on dispersed volumes does not restore the correct date <https://bugzilla.redhat.com/show_bug.cgi?id=1149725> || [Bug 1149726] An 'ls' can return invalid contents on a dispersed volume before self-heal repairs a damaged directory <https:
14:39 ppai joined #gluster
14:41 redgoo joined #gluster
14:50 mojibake Fen1: GA = General Availability, I think.
14:51 Fen1 mojibake: so this planning is wrong ?
14:51 rwheeler joined #gluster
14:52 mojibake No idea. Only commenting that GA usually stands for "General Availability"
14:52 itisravi joined #gluster
14:52 mojibake Like RTM. Release to manufacturing.
14:53 bene2 joined #gluster
14:54 kkeithley The planning is the planning. 3.6.0 has slipped, and that was announced here as well as on the mailing lists.  3.6.0beta3 was released three days ago
15:03 diegows joined #gluster
15:05 cjanbanan semiosis: That makes perfect sense, but if that is the case I don't understand how read performance can be increased when reading a GlusterFS file system compared to reading directly from the underlying FS(?). Apparently it depends on file size. Maybe the cache plays tricks on me...
15:05 DV_ joined #gluster
15:07 hchiramm_ joined #gluster
15:10 DV joined #gluster
15:11 DV__ joined #gluster
15:14 theron joined #gluster
15:19 julim joined #gluster
15:24 longshot902 joined #gluster
15:24 justinmburrous joined #gluster
15:25 daMaestro joined #gluster
15:32 necrogami joined #gluster
15:32 ninkotech joined #gluster
15:34 hchiramm_ joined #gluster
15:38 itisravi joined #gluster
15:39 DV__ joined #gluster
15:42 necrogami joined #gluster
15:45 sprachgenerator joined #gluster
15:59 krypto_ joined #gluster
16:08 krypto_ hi i am using 3PAR storage and plan to try glusterfs, but not sure how to connect it to get higher IOPS. Our application needs 1000 iops, i have 4 servers to try gluster, any suggestion on the number of disks to use for better iops?
16:14 Slydder joined #gluster
16:14 Slydder hey all
16:19 rotbeard joined #gluster
16:20 jobewan joined #gluster
16:20 Slydder so. I still have a problem that the glusterfs nfs process is still not starting on one node. I can start it by hand but it won't start on its own. any ideas?
16:25 MacWinner joined #gluster
16:28 _Bryan_ joined #gluster
16:37 sputnik13 joined #gluster
16:40 soumya joined #gluster
16:44 ronis joined #gluster
16:47 ronis_ joined #gluster
17:07 zerick joined #gluster
17:18 itisravi joined #gluster
17:21 ndevos Slydder: you do not happen to have changed the nfs.disable option for the volume?
17:26 cjanbanan joined #gluster
17:26 justinmburrous joined #gluster
17:35 Slydder no. it works on one node but not the other.
17:35 Slydder 2 node replication and nfs starts on the one but not the other.
17:36 Slydder even reinstalled nfs-common to ensure it wasn't corrupted somehow.
17:36 Slydder as well as rpcbind.
17:41 julim joined #gluster
17:49 cfeller joined #gluster
17:49 atrius joined #gluster
17:53 justinmburrous joined #gluster
17:57 ndevos Slydder: when nfs is not running, there should be no mount, nfs, nlockmgr or the like in the output of 'rpcinfo' - can you check that?
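That check might look roughly like this; exact rpcinfo output differs a bit per distribution:

    # gluster/nfs registers itself with rpcbind as nfs, mountd and nlockmgr
    rpcinfo -p localhost | egrep 'nfs|mountd|nlockmgr'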
17:59 diegows joined #gluster
17:59 cjanbanan joined #gluster
18:00 ron-slc joined #gluster
18:03 Slydder ndevos: rpcinfo shows no mountd/nfs entries on the node that is not working correctly. on the one that works there are mountd/nfs entries
18:05 Slydder just tested rpc connection to localhost port 111 with telnet and connection is established so no firewall in the way. socket is also available.
18:06 ndevos Slydder: well, you also mentioned you can start the glusterfs/nfs process manually, so I think there is something up with glusterd
18:07 ndevos Slydder: can you run 'glusterd --log-level=DEBUG' and see if that produces something in the logs?
18:09 chirino joined #gluster
18:11 chirino_m joined #gluster
18:12 chirino_m joined #gluster
18:14 Slydder [2014-10-06 18:11:07.787019] E [socket.c:2161:socket_connect_finish] 0-glusterfs: connection to 127.0.0.1:24007 failed (Connection refused)
18:14 Slydder but I have gluster binding to a specific ip address
18:15 chirino joined #gluster
18:18 Slydder [2014-10-06 18:16:25.945319] E [glusterfsd-mgmt.c:1601:mgmt_rpc_notify] 0-glusterfsd-mgmt: failed to connect with remote-host: localhost (No data available)
18:18 vimal joined #gluster
18:19 ndevos Slydder: hmmm, what version of gluster is that? I didnt know there were any options to have the processes listening on only selected IPs?
18:19 Slydder transport.socket.bind-address
18:20 Slydder and glusterd is the only process listening on 24007, but not on localhost
18:20 ThatGraemeGuy joined #gluster
18:24 ndevos do you use that option on both of your storage servers?
18:24 Slydder jepp
18:24 Slydder and they are both connected to each other.
18:25 ndevos I dont see any reference to it in the starting of the glusterfs/nfs path, I would assume glusterfs/nfs contacts localhost
18:26 ndevos well, one of my systems seems to use a unix-socket to connect glusterfs/nfs to glusterd...
18:27 Slydder if I comment the bind out it works.
18:27 Slydder not good. very bad actually
18:27 Slydder unless you can bind to 2 addresses.
18:30 Slydder not good at all. I do NOT want gluster listening on my external address. I don't care if I can block it or not. it should work on the addresses I tell it to and not whatever it decides it wants to work on.
18:34 thermo44 Guys, someone with experience in glusterfs with Windows Clients (hundreds) please let me know, I have big file servers that are in need of a good configuration, Contact me PM, or if anyone can point me to the right direction, let me know. Thanks in advance!
18:34 ndevos hmm, I do not think you can specify two addresses for that, even if you put it as a hostname in /etc/hosts
18:36 ndevos Slydder: there must be a difference between your two systems, maybe a different version, selinux, or something?
18:37 Slydder no diff. same debian release, up to date, same gfs. but I did notice that the bind is missing on the one server where it was working correctly. so that explains why it was working on the one but not the other.
18:38 Slydder still not correct though. if you offer a way to bind to an address then you should respect that option and not decide to just ignore it.
18:39 ndevos okay, that explains and on my 3.5.x system glusterfs/nfs contacts glusterd on port localhost:24007 too
18:39 Slydder yeah. which is dumb because they allow you to bind to a single address. which means nfs should ask there and not localhost.
18:40 ndevos yes, and you should file a bug for that, it definitely is something we can change
18:40 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
18:41 ndevos probably not only glusterfs/nfs does that, also any other daemons started by glusterd (like self-heal, quota, ...)
18:41 Slydder will do
18:41 Slydder fuse works correctly
18:41 Slydder nfs not
18:41 Slydder shd also
18:41 ndevos yeah, fuse contacts the hostname you tell it to, it does not rely on localhost
18:41 Slydder sorry. shd also does not work. explains why shd wouldn't start as well on that node
18:45 ndevos thats expected, they all use the same code for starting - xlators/mgmt/glusterd/src/glusterd-utils.c:glusterd_nodesvc_start()
18:46 chirino joined #gluster
18:46 Slydder which component should I choose?
18:46 ndevos glusterd, and explain that in that ^ function the --volfile-server=... option should get added
18:47 ndevos well, not really added, the "-s" option should get the correct hostname/ip of the transport.socket.bind-address
18:48 ndevos glusterd_nodesvc_start() calls runner_add_args( ... , "-s", "localhost", ...) and that should get fixed
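For context, starting the gluster NFS service "by hand" as mentioned earlier presumably amounts to something like the following; the -s value, the volfile-id and the pid/log paths are assumptions and will differ per setup:

    glusterfs -s 192.0.2.10 --volfile-id gluster/nfs \
        -p /var/lib/glusterd/nfs/run/nfs.pid \
        -l /var/log/glusterfs/nfs.log

where 192.0.2.10 stands in for the transport.socket.bind-address that, per the discussion, glusterd should be passing instead of localhost.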
18:53 Slydder https://bugzilla.redhat.com/show_bug.cgi?id=1149857
18:53 glusterbot Bug 1149857: urgent, unspecified, ---, gluster-bugs, NEW , Option transport.socket.bind-address ignored
18:57 ndevos Slydder: just to be clear, how do you set transport.socket.bind-address? in glusterd.vol or with the gluster command?
18:58 ndevos dont tell me here, add it to the bug :)
19:00 glusterbot New news from newglusterbugs: [Bug 1149857] Option transport.socket.bind-address ignored <https://bugzilla.redhat.com/show_bug.cgi?id=1149857>
19:00 longshot902 joined #gluster
19:01 Slydder glusterd.vol
19:01 Slydder lol
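For reference, a hedged sketch of how that option sits in /etc/glusterfs/glusterd.vol; the 192.0.2.10 address is a made-up example and the surrounding options mirror a typical default file:

    volume management
        type mgmt/glusterd
        option working-directory /var/lib/glusterd
        option transport-type socket
        option transport.socket.bind-address 192.0.2.10
    end-volume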
19:03 ndevos Slydder: do you think this should be a blocker issue for 3.5.3?
19:05 Slydder of course. otherwise it must bind to 0.0.0.0 which is an obvious security risk whether or not you have acl's
19:05 Slydder any port open to the world is a risk.
19:07 Slydder the only thing I can do atm is let it bind to 0.0.0.0 and block it with iptables and hope it holds until 3.5.3 is out with a fix. even if I use fuse I still have no shd when bound to another address.
19:08 ndevos okay, I'll mark it like that then :)
19:09 Slydder kk.
19:11 cjanbanan joined #gluster
19:12 plarsen joined #gluster
19:15 Slydder another option would be to bind automatically to localhost and the bind address. that way you always have the localhost available which is good as well as the option of binding to a certain address.
19:15 Slydder ndevos: ^^
19:24 Slydder so. outta here for tonight. have fun and thanks for the help. later.
19:24 Slydder left #gluster
19:34 georgeh joined #gluster
19:39 lmickh joined #gluster
19:41 justinmburrous joined #gluster
19:56 cjanbanan joined #gluster
19:56 bene joined #gluster
20:14 n-st joined #gluster
20:22 edwardm61 joined #gluster
20:29 cjanbanan joined #gluster
20:30 glusterbot New news from newglusterbugs: [Bug 1133073] High memory usage by glusterfs processes <https://bugzilla.redhat.com/show_bug.cgi?id=1133073>
20:42 justinmburrous joined #gluster
20:48 vipulnayyar joined #gluster
20:49 jontesehlin left #gluster
20:50 justinmburrous joined #gluster
20:56 vipulnayyar joined #gluster
21:03 anotheral left #gluster
21:06 justinmburrous joined #gluster
21:13 dtrainor joined #gluster
21:23 atrius joined #gluster
21:36 edong23 joined #gluster
21:45 diegows joined #gluster
21:49 Pupeno joined #gluster
21:56 cjanbanan joined #gluster
21:57 jbrooks joined #gluster
21:58 elico joined #gluster
22:01 theron joined #gluster
22:17 n-st joined #gluster
22:21 Pupeno joined #gluster
22:29 ninkotech__ joined #gluster
22:31 glusterbot New news from newglusterbugs: [Bug 928781] hangs when mount a volume at own brick <https://bugzilla.redhat.com/show_bug.cgi?id=928781>
22:31 refrainblue joined #gluster
22:33 refrainblue joined #gluster
22:56 cjanbanan joined #gluster
22:57 gildub joined #gluster
23:07 justinmburrous joined #gluster
23:14 jbrooks joined #gluster
23:19 chirino_m joined #gluster
23:21 jbrooks joined #gluster
23:26 cjanbanan joined #gluster
23:27 Pupeno joined #gluster
23:36 justinmburrous joined #gluster
23:44 elico joined #gluster
23:56 cjanbanan joined #gluster
