
IRC log for #gluster, 2016-01-27


All times shown according to UTC.

Time Nick Message
00:00 JoeJulian Sure, it'll go back to enforcing if you don't change the sysconfig file, but if you're just testing you won't mind.
00:01 cpetersen_ successful
00:01 cpetersen_ why
00:01 JoeJulian selinux enforces port usage.
00:01 cpetersen_ but it was only on one node
00:01 cpetersen_ lol
00:01 haomaiwa_ joined #gluster
00:01 cpetersen_ interesting
00:01 JoeJulian Not sure. Check audit2allow < /var/log/audit/audit.log
00:03 cpetersen_ Could an improper crypto-key setup cause that kind of problem?
00:04 JoeJulian Maybe, I guess, if a config file had the wrong context permissions.
00:04 JoeJulian But audit2allow will tell you that.
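A minimal sketch of the check JoeJulian points at, assuming the default RHEL/CentOS audit log location; the policy module name gluster-local is made up:

    # list recent AVC denials and see what rules audit2allow would generate
    ausearch -m avc -ts recent
    audit2allow < /var/log/audit/audit.log
    # if the denials are expected, build and load a local policy module
    audit2allow -a -M gluster-local
    semodule -i gluster-local.pp
    # or, for testing only, drop to permissive mode (enforcing returns after a
    # reboot unless /etc/selinux/config is changed)
    setenforce 0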
00:05 cpetersen_ I thank you for your support sir.
00:06 cpetersen_ Once again, you have been very accommodating!
00:06 JoeJulian Glad I could help.
00:46 nangthang joined #gluster
00:56 zhangjn joined #gluster
01:01 haomaiwang joined #gluster
01:05 frakt joined #gluster
01:05 Liquid-- joined #gluster
01:14 calavera joined #gluster
01:14 zhangjn joined #gluster
01:28 Lee1092 joined #gluster
01:34 harish joined #gluster
01:37 gildub joined #gluster
01:43 EinstCrazy joined #gluster
01:57 haomaiwa_ joined #gluster
02:01 haomaiwa_ joined #gluster
02:09 zhangjn joined #gluster
02:13 bennyturns joined #gluster
02:13 nishanth joined #gluster
02:17 skoduri joined #gluster
02:20 nangthang joined #gluster
02:28 harish joined #gluster
02:31 calavera joined #gluster
02:38 CyrilPeponnet joined #gluster
02:55 chirino joined #gluster
03:01 haomaiwang joined #gluster
03:33 bharata-rao joined #gluster
03:45 gem joined #gluster
03:54 chirino_m joined #gluster
03:56 itisravi joined #gluster
04:00 atinm joined #gluster
04:00 kshlm joined #gluster
04:01 haomaiwa_ joined #gluster
04:05 kdhananjay joined #gluster
04:07 overclk joined #gluster
04:07 mowntan joined #gluster
04:09 nathwill joined #gluster
04:19 nehar joined #gluster
04:21 shubhendu joined #gluster
04:23 nbalacha joined #gluster
04:23 ramteid joined #gluster
04:30 gowtham joined #gluster
04:33 kanagaraj joined #gluster
04:36 vmallika joined #gluster
04:37 karthikfff joined #gluster
04:40 ndarshan joined #gluster
04:43 spalai joined #gluster
04:53 ianchen06 joined #gluster
04:53 ianchen06 Hello
04:53 glusterbot ianchen06: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
04:54 ianchen06 I am wondering if it is possible to just change the motherboard of the server and still have Gluster working when I put the server back online.
04:54 ianchen06 Thanks a lot for your help
04:58 ianchen06 joined #gluster
04:58 ianchen06 thanks a lot
04:59 Bhaskarakiran joined #gluster
05:00 Humble joined #gluster
05:01 haomaiwa_ joined #gluster
05:03 nbalacha joined #gluster
05:05 spalai left #gluster
05:08 pppp joined #gluster
05:12 skoduri joined #gluster
05:12 zhangjn joined #gluster
05:15 hgowtham joined #gluster
05:16 Manikandan joined #gluster
05:21 EinstCrazy joined #gluster
05:22 sakshi joined #gluster
05:23 gem joined #gluster
05:23 hchiramm joined #gluster
05:24 nathwill joined #gluster
05:43 Apeksha joined #gluster
05:45 skoduri joined #gluster
05:47 rafi joined #gluster
05:48 EinstCrazy joined #gluster
05:49 anil joined #gluster
05:55 kotreshhr joined #gluster
06:01 haomaiwang joined #gluster
06:05 nishanth joined #gluster
06:07 nbalacha joined #gluster
06:07 bluenemo joined #gluster
06:09 bluenemo hi guys. I'm getting a hanging gluster volume heal gfs_fin_web info on one of my three servers, all doing replication
06:10 bluenemo I tried restarting gluster-server process on all nodes, no effect. running glusterfs 3.7.6 built on Nov  9 2015 15:17:05
06:10 vimal joined #gluster
06:10 atalur joined #gluster
06:14 JoeJulian bluenemo: Try "gluster volume start $volname force". That will force a restart on the self-heal daemons.
06:14 bluenemo Hi JoeJulian , nice to see you around! :) doesn't seem to have any effect
06:15 bluenemo replication seems to work so far - when I create /var/www/foobar, it shows up on the other nodes
06:15 JoeJulian Of course. Replication happens at the client.
06:16 JoeJulian Self-heal only comes into play when a client can write to fewer than the full complement of replicated bricks.
06:16 bluenemo no, I changed my setup a bit. three nodes now, all local gluster servers, and all using the native client to mount /var/www
06:16 bluenemo ah ok - not sure I get you correctly
06:17 bluenemo the command was working some hours ago.
06:17 JoeJulian Doesn't matter. A client is still a client, even when the computer is also a server. Think function not hardware.
06:18 bluenemo ah ok. so yes, replication seems to work - but the heal info command shouldn't hang anyway, should it?
06:19 JoeJulian Check /var/log/glusterfs/glustershd.log on all your servers for clues.
06:19 JoeJulian Right, heal info should not hang.
06:20 bluenemo should the heal command leave some notes there when executed? tail -f?
06:20 vmallika joined #gluster
06:20 spalai joined #gluster
06:21 JoeJulian There will be log messages in cli.log on the machine where you perform the operation, log entries in all the etc-glusterfs-glusterd.vol.log files, and should be something in the glustershd.log files, yes.
06:21 bluenemo found some [2016-01-27 06:06:18.713123] E [MSGID: 114058] [client-handshake.c:1524:client_query_portmap_cbk] 0-gfs_fin_web-client-1: failed to get the port number for remote subvolume. Please run 'gluster volume status' on server to see if brick process is running. on two of the three nodes
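A rough sketch of the checks suggested above, assuming the volume name gfs_fin_web and the default log locations:

    # confirm every brick process is online and has a port
    gluster volume status gfs_fin_web
    # watch the self-heal daemon, glusterd and cli logs while re-running heal info
    tail -f /var/log/glusterfs/glustershd.log \
            /var/log/glusterfs/etc-glusterfs-glusterd.vol.log \
            /var/log/glusterfs/cli.log &
    gluster volume heal gfs_fin_web info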
06:23 shubhendu joined #gluster
06:23 bluenemo JoeJulian, all she wrote: http://paste.debian.net/hidden/00e7295b/
06:23 glusterbot Title: Debian Pastezone (at paste.debian.net)
06:24 bluenemo last message seems to repeat - also     E [MSGID: 114031] [client-rpc-fops.c:251:client3_3_mknod_cbk] 0-gfs_fin_web-client-2: remote operation failed. Path: (null) [permission denied] just showed up in /var/log/gluster-mount.log
06:24 JoeJulian glustershd.log is missing from that pastebin.
06:25 JoeJulian oh
06:25 JoeJulian no it's not, it's empty.
06:25 JoeJulian sorry.
06:25 bluenemo no, it's tail -n 0, so it only shows messages after I entered gluster volume heal gfs_fin_web info
06:25 JoeJulian And I don't know where /var/log/gluster-mount.log comes from. It's not any of the default log locations.
06:25 kovshenin joined #gluster
06:26 bluenemo last few lines of it: http://paste.debian.net/hidden/5b0bbdd2/
06:26 glusterbot Title: Debian Pastezone (at paste.debian.net)
06:26 bluenemo ah yes, i specified it in my mount options
06:27 bluenemo -o log-file=/var/log/fo.log
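For reference, a fuse mount with those options looks roughly like this; the server name and mount point are placeholders:

    mount -t glusterfs -o log-file=/var/log/gluster-mount.log,log-level=INFO \
          server1:/gfs_fin_web /var/www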
06:28 JoeJulian Other than the fact that it keeps getting restarted, that last log looks fine.
06:29 bluenemo yeah that was me
06:29 bluenemo tried just restarting it on all nodes
06:29 bluenemo also rebooted this one, as some apache processes read from /var/www and all of those went zombie about 1 hour ago and about 6 hours ago
06:29 bluenemo not perfectly sure why
06:30 bluenemo logs only say "went zombie" basically.
06:31 ppai joined #gluster
06:31 bluenemo another node just said [afr-self-heal-common.c:651:afr_log_selfheal] 0-gfs_fin_web-replicate-0: Completed entry selfheal on 16e932f9-ebf1-4b60-bb44-c86348b1aadc. source=0 sinks=2    in glusterfshd.log, so it does seem to do some rep
06:36 jiffin joined #gluster
06:38 bluenemo JoeJulian, restarting the servers (shutdown -r now) helped. gluster rep says 0 files to replicate now. I remembered this "trick" from some months ago. Set up a new gluster for production now; it's been running like this for about 48 hours.
06:40 JoeJulian Sounds like maybe you had a brick locked up.
06:40 JoeJulian Or a client.
06:41 bluenemo what would have been the command to find out?
06:41 bluenemo I'm monitoring whether I can read and write from glusterfs; none of that alerted
06:41 JoeJulian Right, because your test hung.
06:42 JoeJulian You're mounting via fuse, right?
06:42 bluenemo sorry, don't get that
06:42 bluenemo yes
06:42 bluenemo on all three servers
06:42 dusmant joined #gluster
06:42 JoeJulian Does this happen often?
06:42 kovshenin joined #gluster
06:43 bluenemo well, I had apache go zombie twice tonight, although gfs was always "usable". 5 hours ago the heal info cmd also didn't work, but I decided to wait and see in the morning.
06:44 bluenemo I've been running this setup for 48 hours, had this problem twice now
06:44 bluenemo as in apache going Z, but I'm not perfectly sure it's related to gluster
06:44 bluenemo although apache likes to become a Z if the /var/www it's accessing does strange stuff
06:45 JoeJulian Ok, next time it happens, as soon as you can notice the problem (probably zombies) email the client logs to gluster-devel and describe the symptoms.
06:46 JoeJulian Just, the logs since this last reboot.
06:46 bluenemo ok, will do :)
06:46 JoeJulian I've not seen this happening, but it sure sounds filesystem related.
07:01 haomaiwang joined #gluster
07:02 nathwill joined #gluster
07:03 gem_ joined #gluster
07:03 unlaudable joined #gluster
07:03 ws2k3 joined #gluster
07:07 jiffin1 joined #gluster
07:08 skoduri_ joined #gluster
07:11 kotreshhr joined #gluster
07:11 arcolife joined #gluster
07:12 mobaer joined #gluster
07:17 Manikandan joined #gluster
07:19 bturner joined #gluster
07:22 EinstCrazy joined #gluster
07:22 sakshi joined #gluster
07:24 ashiq joined #gluster
07:25 DV joined #gluster
07:28 mhulsman joined #gluster
07:31 gem joined #gluster
07:34 jtux joined #gluster
07:35 Humble joined #gluster
07:35 hchiramm joined #gluster
07:40 Manikandan joined #gluster
07:45 zhangjn joined #gluster
07:49 [Enrico] joined #gluster
07:52 jiffin1 joined #gluster
08:01 haomaiwa_ joined #gluster
08:08 skoduri_ joined #gluster
08:09 EinstCrazy joined #gluster
08:09 karnan joined #gluster
08:11 Thomasx Hello everyone, I have a question about glusterfs: I would like to build a striped cluster with 2 servers using glusterfs. On top of that striped cluster I would like to build geo-redundancy with another glusterfs that also contains a striped cluster with 2 servers. Is this possible? If yes, can someone tell me how to set this up?
08:12 bhuddah this sounds like a dangerous plan in terms of redundancy, Thomasx
08:23 ahino joined #gluster
08:23 b0p joined #gluster
08:25 Thomasx Hi bhuddah. Why should redundancy be a problem, can you explain? Maybe I didn't explain it correctly. I would like to set up two separate glusterfs systems on two separate networks. To sync the data on these systems I would like to set up geo-redundancy.
08:26 bhuddah Thomasx: yeah. but if you stripe with two servers both have to work all the time.
08:27 Thomasx okay, and if I build just one server with a striped volume on each side and then sync the data with geo-redundancy? Would this work?
08:28 bhuddah i don't understand you.
08:28 bhuddah for striping you need multiple servers.
08:28 bhuddah what you probably want is replication.
08:29 zhangjn joined #gluster
08:34 kshlm joined #gluster
08:41 Thomasx okay, now I am a bit confused, sorry. Maybe it would help if I explain what I would like to build: I would like to build a server with one storage for backup files. For the geo-redundancy I would like to implement glusterfs geo-redundancy. This should sync the data to another server with storage on a different network. So I have to set up a volume with glusterfs on both sides and set up
08:41 Thomasx geo-redundancy over that. But what should I do if I run out of space and have to extend the storage with one more server (with new storage) on each side? Do I just need to extend the volume on each side, and would geo-redundancy keep working without any further changes?
08:43 bhuddah there are multiple options.
08:43 bhuddah so how many physical servers are you planning to install in total and in each location?
08:44 bhuddah do you want redundancy on each site too?
08:46 nbalacha joined #gluster
08:47 ctria joined #gluster
08:48 Thomasx no, that's not necessary
08:48 bhuddah uh. for real?
08:48 bhuddah okay.
08:49 bhuddah then you are probably better off without gluster and using "normal" filesystems and unison or some other tool for synchronizing.
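If the gluster route is taken anyway, the usual shape is a plain (non-striped) volume per site plus geo-replication from the primary site to the secondary. A rough sketch with made-up host names, brick paths and volume names; passwordless SSH between the sites is assumed:

    # site A (master) and site B (slave): one volume each
    gluster volume create backups site-a1:/bricks/backups/brick
    gluster volume start backups
    # ... same two commands on site B with site-b1 ...

    # on site A: create and start the geo-replication session
    gluster system:: execute gsec_create
    gluster volume geo-replication backups site-b1::backups create push-pem
    gluster volume geo-replication backups site-b1::backups start

    # growing later: add a brick on each side and rebalance; the session keeps syncing
    gluster volume add-brick backups site-a2:/bricks/backups/brick
    gluster volume rebalance backups start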
08:50 deniszh joined #gluster
08:50 shubhendu joined #gluster
08:51 dusmant joined #gluster
08:52 Pank joined #gluster
08:53 Pank Hello, I am facing a problem with the vim editor when enabling the trashcan functionality in glusterfs version 3.7.6
08:54 rafi1 joined #gluster
08:55 Pank the problem is that whenever I try to open a file via the vim editor, it throws an error and opens itself in read-only mode
08:56 Pank somebody please help me
09:00 [Enrico] joined #gluster
09:01 haomaiwa_ joined #gluster
09:02 jiffin Pank: Can you please share your volume configuration?
09:03 Pank We have 2 bricks with replica
09:03 Pank over tcp mode
09:03 jiffin Pank: the trash feature is not stable for dist-rep volumes in 3.7; we are planning to do that by 3.8
09:04 anoopcs Pank, Are there any errors in any of the logs?
09:04 b0p joined #gluster
09:05 Pank no I am not seeing any type of error
09:05 Pank all commands except vim are working fine
09:06 Pank whenever I use vim, my parent directory permissions get changed to 000 mode
09:06 anoopcs Pank, especially brick logs?
09:08 ramky joined #gluster
09:08 Pank Yes anoopcs, there are lots of error messages in the brick log
09:08 Pank Hey Anoopcs, can i paste them here?
09:09 anoopcs Pank, Use fpaste.org
09:09 anoopcs Pank, permission being changed may not be related to trash feature.
09:11 Pank please find error log here http://ur1.ca/ogdc5
09:11 glusterbot Title: #315186 Fedora Project Pastebin (at ur1.ca)
09:11 Pank also if I disable this trashcan feature then it is working fine
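The trashcan feature in question is a per-volume option; a quick way to check and toggle it, assuming the volume is named vol1s1:

    # show the options currently set on the volume
    gluster volume info vol1s1
    # disable / re-enable the trash translator
    gluster volume set vol1s1 features.trash off
    gluster volume set vol1s1 features.trash on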
09:12 skoduri__ joined #gluster
09:12 Pank Please note that if I log in as the root user then all is working fine, but if I log in as any non-root user then I am facing this issue
09:13 Saravanakmr joined #gluster
09:15 Pank So here is my scenario: I have one volume vol1s1 mounted on the client at its /home directory. There I have created a few users, and now whenever I log in as any non-root user and edit any file via the vim editor, it throws the following error: "E200: *ReadPre autocommands made the file unreadable"
09:16 Pank Hence I tried to troubleshoot this problem and found that whenever I edit via the vim editor, my home directory permissions get changed to 000 mode
09:17 Humble joined #gluster
09:18 Pank Would you please help me to figure out what is the issue here
09:19 hchiramm joined #gluster
09:20 Pank One more info: I have also enabled quota on the same volume
09:21 anoopcs Pank, Reading through the logs . . . I don't see any errors from trash side.
09:22 spalai joined #gluster
09:22 zhangjn joined #gluster
09:22 Pank these are the only errors that come on the brick side whenever I do something via the vim editor
09:24 atinm joined #gluster
09:24 anoopcs Pank, and are you sure that you don't see any of these errors when you switch off trash translator?
09:27 Pank I think these errors are coming due to some other reason, but still I am not able to do anything via the vim editor
09:27 Pank if you want then I can tell you how to reproduce this problem
09:29 anoopcs Pank, That would be great.
09:30 ekuric joined #gluster
09:31 Pank ok, wait, I am sharing a link regarding the same
09:34 aravindavk joined #gluster
09:34 ahino joined #gluster
09:40 dusmant joined #gluster
09:41 shubhendu joined #gluster
09:42 post-factum joined #gluster
09:43 post-factum going on with my shard questions. I know that sharding can be enabled on existing volumes. How does it deal with existing files on those volumes? Should I run rebalance or heal to make all files sharded?
09:45 Pank Hi Anoopcs,
09:46 Pank Here is the link to reproduce this scenario http://ur1.ca/ogdey
09:46 glusterbot Title: #315192 Fedora Project Pastebin (at ur1.ca)
09:47 Pank please let me know if you need anything else
09:48 Pank here is my email id <pank.singh9191@gmail.com>
09:49 kdhananjay post-factum: So sharding isn't enabled on volumes by default.
09:50 kdhananjay post-factum: When you do enable sharding on a volume with existing files, these files remain as they were -- unsharded.
09:50 MessedUpHare joined #gluster
09:51 post-factum I understand that. Is there any way to make them sharded (other than rewriting them)?
09:51 MessedUpHare joined #gluster
09:52 kdhananjay post-factum: nope. You will need to create a new volume with sharding enabled, leaving the existing volume as it is. And then copy the files from the existing volume into the new volume,
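A minimal sketch of that, with made-up hosts and brick paths; features.shard and features.shard-block-size are the relevant volume options:

    # new distributed-replicated volume with sharding enabled from the start
    gluster volume create backups-sharded replica 3 \
        host1:/bricks/b1 host2:/bricks/b1 host3:/bricks/b1 \
        host1:/bricks/b2 host2:/bricks/b2 host3:/bricks/b2
    gluster volume set backups-sharded features.shard on
    gluster volume set backups-sharded features.shard-block-size 512MB
    gluster volume start backups-sharded
    # then copy the existing files over from a mount of the old volume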
09:52 kdhananjay post-factum: what's the use-case you want to use sharding for?
09:53 post-factum for backups. we have lots of big tarballs, and they are distributed unevenly across the bricks
09:54 jiffin1 joined #gluster
09:55 rafi joined #gluster
09:56 post-factum kdhananjay: here is df output: https://gist.github.com/b80c271f5679fcac3ece
09:56 pppp joined #gluster
09:56 glusterbot Title: - · GitHub (at gist.github.com)
09:57 kdhananjay post-factum: Oh ok. how big are these tarballs?
09:58 post-factum kdhananjay: up to 90G
09:58 rafi joined #gluster
09:59 shubhendu joined #gluster
10:01 haomaiwang joined #gluster
10:04 kdhananjay post-factum: ok, and what other features do you plan to use in this volume?
10:06 post-factum kdhananjay: that is just distributed-replicated volume for cold backups, nothing more. any issues with sharding?
10:06 Saravanakmr joined #gluster
10:10 Simmo joined #gluster
10:11 pank joined #gluster
10:14 Simmo Hi All! : )
10:15 Simmo I'm trying to remove a node from a replica set which consists of 3 peers.
10:16 Simmo I did it with "gluster volume remove-brick ..." and then "gluster peer detach HOSTNAME"
10:16 Simmo But still it looks to me that the files are replicated also on the removed peer
10:16 kdhananjay post-factum: no. it is built to work fine with workloads where there is a single writer to a given file. one example would be the virtual machine image store use case.
10:16 Simmo do I miss something ?
10:17 kdhananjay post-factum: in fact it has been tested and found to be working well for VM store use case.
10:17 kdhananjay post-factum: having said that, we _are_ going in the direction of making it useful for general purpose workloads.
10:18 Simmo Uh.. so what is the best way to proceed ? Brutally shutdown the machine ? : )
10:18 kdhananjay post-factum: i would love to hear your feedback on sharding with the backup use case that you intend to use it for, if you are willing to give it a try.
10:19 post-factum kdhananjay: ok, the vm store is quite a reliable test to trust. But the question of in-place conversion remains. Given "file1" sized 100G, will copying it to a temp file and renaming it back work?
10:19 post-factum within the same volume
10:19 anoopcs pank, That's weird. Why would you do a glusterfs mount at /home directly?
10:20 bfm joined #gluster
10:20 kdhananjay post-factum: if you face any issues, you can ping me right here when i am around and i will answer your questions. if not, you can always send a mail on gluster-users@gluster.org
10:20 anoopcs pank, If possible can you try mounting it under some sub-directory?
10:20 kdhananjay post-factum: yes, renames would work fine. :)
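So the in-place conversion being asked about would look roughly like this on a client mount of the (now sharding-enabled) volume; the file name is hypothetical, and it is the rewrite that actually produces the shards:

    cp --preserve=all file1.tar file1.tar.tmp && mv file1.tar.tmp file1.tar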
10:21 post-factum kdhananjay: and what shard size should i choose? i though that something about 10G would be ok
10:21 post-factum *thought
10:22 pank We have lots of users and wanted to manage each user via quota as well as the trashcan feature
10:23 pank anoopcs, yes, I have tried other places also. It behaves the same.
10:23 anoopcs pank, Hm. Ok.
10:24 skoduri joined #gluster
10:24 pranithk joined #gluster
10:25 pranithk post-factum: We tested it very well for 512MB
10:25 pranithk post-factum: Nothing prevents us from changing the shard size. It should work fine. But we haven't done testing with that. If you do the tests and run into problems we will be happy to help you out
10:27 post-factum pranithk: ok, thanks
10:28 pranithk post-factum: As kdhananjay was saying it works for single writer use case at the moment. We are in the process of making it general purpose. Your feedback is much appreciated!
10:28 pranithk post-factum: Do let us know how your testing goes with 10GB shards
10:29 pranithk post-factum: We would recommend you do your testing with 3.7.7 which is going to be released in a week's time
10:29 post-factum pranithk: ok, but i have to finish memory leaks test first :)
10:29 pranithk post-factum: kdhananjay made some important fixes for the bugs identified by the community users
10:29 dusmant joined #gluster
10:30 jiffin1 joined #gluster
10:30 pranithk post-factum: cool :-)
10:32 [Enrico] joined #gluster
10:32 anoopcs pank, You did glusterfs mount as root, right?
10:33 zhangjn joined #gluster
10:35 anoopcs or as normal user?
10:35 b0p joined #gluster
10:36 harish_ joined #gluster
10:36 jiffin1 joined #gluster
10:39 atinm joined #gluster
10:43 glafouille joined #gluster
10:49 ahino joined #gluster
10:50 skoduri joined #gluster
10:57 overclk joined #gluster
10:58 pank anoopcs, yes, I did it via root user only
11:01 haomaiwa_ joined #gluster
11:08 zhangjn joined #gluster
11:15 anoopcs pank, Any other operation on mount was successful?
11:15 anoopcs pank, Can a non-root user create a file via touch?
11:16 chirino joined #gluster
11:16 anoopcs pank, What are the permissions on /home directory?
11:27 anoopcs pank, I mean permissions for each user's home directory inside /home after mount.
11:28 xMopxShell joined #gluster
11:35 pank those are user-specific permissions
11:37 kdhananjay joined #gluster
11:37 pppp joined #gluster
11:40 anoopcs pank, Ok.
11:40 Slashman joined #gluster
11:43 ppai joined #gluster
11:44 pank anoopcs, all other things are working fine except the vim command
11:45 haomaiwang joined #gluster
11:49 jiffin1 joined #gluster
11:56 tree333 joined #gluster
11:56 atinm REMINDER : Gluster community weekly meeting to begin in ~5 mins
11:56 skoduri joined #gluster
11:58 ahino1 joined #gluster
11:58 rafi joined #gluster
12:01 Manikandan_ joined #gluster
12:08 Sam___ joined #gluster
12:08 skoduri joined #gluster
12:31 anoopcs pank, Hey, I was able to reproduce the issue. This is due to vim's style of editing existing files with the help of swap files. Thanks for reporting the issue. I will investigate more and get back to you through your specified email-id. You can now go ahead and file a bug.
12:31 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
12:31 anoopcs pank, ^^
12:33 anoopcs pank, and you can see the deleted swap file under /home/.trashcan/<user-name>/ .
12:33 anoopcs pank, Can you confirm this?
12:33 zhangjn joined #gluster
12:35 zhangjn joined #gluster
12:39 pank anoopcs, yes, I am able to see .swp files
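Until that interaction is sorted out, one possible workaround on the vim side is to keep swap files off the gluster mount entirely; this is generic vim behaviour, not a gluster fix, and the user/path below are placeholders:

    # open a file without a swap file at all (vim's -n flag)
    vim -n /home/user1/somefile
    # or permanently keep a user's swap files on local /tmp instead of the mount
    echo 'set directory=/tmp//' >> /home/user1/.vimrc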
12:44 MessedUpHare joined #gluster
12:46 Saravanakmr joined #gluster
12:48 Debloper joined #gluster
12:50 [Enrico] joined #gluster
12:54 rafi1 joined #gluster
12:55 ppai joined #gluster
12:56 zhangjn joined #gluster
12:56 pank left #gluster
12:56 anoopcs papamoose, Did you file the bug?
12:57 ashiq joined #gluster
12:57 anoopcs papamoose, Sorry.
12:57 anoopcs wrong person
12:57 Saravanakmr joined #gluster
12:59 unclemarc joined #gluster
13:06 unclemarc joined #gluster
13:10 skoduri joined #gluster
13:11 deniszh joined #gluster
13:14 haomaiwa_ joined #gluster
13:14 anti[Enrico] joined #gluster
13:15 jeek joined #gluster
13:16 Manikandan_ joined #gluster
13:17 dusmant joined #gluster
13:18 pranithk joined #gluster
13:25 unclemarc joined #gluster
13:27 kotreshhr left #gluster
13:34 nottc joined #gluster
13:36 ira joined #gluster
13:37 mhulsman joined #gluster
13:38 shubhendu joined #gluster
13:40 arcolife joined #gluster
13:47 mhulsman joined #gluster
13:48 julim joined #gluster
13:51 chirino joined #gluster
14:01 haomaiwa_ joined #gluster
14:02 haomaiwang joined #gluster
14:02 julim_ joined #gluster
14:03 haomaiwang joined #gluster
14:04 haomaiwa_ joined #gluster
14:05 haomaiwang joined #gluster
14:06 haomaiwang joined #gluster
14:07 haomaiwang joined #gluster
14:08 haomaiwa_ joined #gluster
14:09 haomaiwa_ joined #gluster
14:10 haomaiwa_ joined #gluster
14:11 haomaiwa_ joined #gluster
14:12 haomaiwang joined #gluster
14:13 haomaiwa_ joined #gluster
14:14 haomaiwa_ joined #gluster
14:15 haomaiwang joined #gluster
14:16 EinstCrazy joined #gluster
14:16 haomaiwang joined #gluster
14:17 haomaiwang joined #gluster
14:18 haomaiwa_ joined #gluster
14:19 haomaiwang joined #gluster
14:20 haomaiwang joined #gluster
14:21 shyam joined #gluster
14:21 haomaiwang joined #gluster
14:22 haomaiwa_ joined #gluster
14:23 haomaiwang joined #gluster
14:24 haomaiwa_ joined #gluster
14:25 16WAAO5VH joined #gluster
14:26 haomaiwang joined #gluster
14:30 jtux joined #gluster
14:32 ashiq joined #gluster
14:33 mhulsman joined #gluster
14:35 post-factum joined #gluster
14:42 paraenggu joined #gluster
14:43 skylar joined #gluster
14:45 plarsen joined #gluster
14:52 Lee1092 joined #gluster
14:53 squizzi joined #gluster
14:57 nbalacha joined #gluster
14:57 bennyturns joined #gluster
14:59 ira joined #gluster
15:01 haomaiwa_ joined #gluster
15:03 shyam joined #gluster
15:07 deniszh joined #gluster
15:07 post-factum joined #gluster
15:17 squizzi joined #gluster
15:18 post-factum joined #gluster
15:25 skoduri joined #gluster
15:26 spalai joined #gluster
15:26 Saravanakmr joined #gluster
15:30 raghu joined #gluster
15:30 b0p joined #gluster
15:39 farhoriz_ joined #gluster
15:41 nehar joined #gluster
15:51 DV joined #gluster
15:59 ahino joined #gluster
16:01 7YUAARDKO joined #gluster
16:03 hagarth joined #gluster
16:03 zwevans joined #gluster
16:05 Liquid-- joined #gluster
16:13 natarej_ joined #gluster
16:14 natarej joined #gluster
16:18 nathwill joined #gluster
16:22 nickage_ joined #gluster
16:24 semiosis_ joined #gluster
16:25 semiosis joined #gluster
16:25 samikshan joined #gluster
16:26 tdasilva joined #gluster
16:30 PaulePanter joined #gluster
16:41 atinm joined #gluster
16:46 Arrfab joined #gluster
16:46 Arrfab joined #gluster
16:47 CyrilPeponnet joined #gluster
17:01 haomaiwa_ joined #gluster
17:02 bfm joined #gluster
17:03 muneerse2 joined #gluster
17:10 ivan_rossi left #gluster
17:32 hagarth joined #gluster
17:34 rafi joined #gluster
17:35 bowhunter joined #gluster
17:38 mhulsman joined #gluster
17:39 luizcpg joined #gluster
17:48 bowhunter joined #gluster
17:48 rafi1 joined #gluster
17:53 b0p joined #gluster
17:53 mhulsman joined #gluster
18:01 haomaiwa_ joined #gluster
18:14 EinstCrazy joined #gluster
18:19 ira joined #gluster
18:29 ovaistariq joined #gluster
18:31 ovaistar_ joined #gluster
18:53 mpingu Is there any performance difference between GlusterFS on IPv4 vs IPv6?
18:56 skylar joined #gluster
18:59 deniszh joined #gluster
19:01 haomaiwang joined #gluster
19:18 ira joined #gluster
19:23 21WAAW55K joined #gluster
19:23 bowhunter joined #gluster
19:36 jwang joined #gluster
19:36 calavera joined #gluster
19:39 ovaistariq joined #gluster
19:44 wushudoin joined #gluster
19:56 mhulsman joined #gluster
19:56 hagarth joined #gluster
19:58 jmarley joined #gluster
20:01 ovaistariq joined #gluster
20:01 haomaiwa_ joined #gluster
20:05 calavera joined #gluster
20:09 ovaistariq joined #gluster
20:19 nickage_ joined #gluster
20:43 Splix76 joined #gluster
20:43 Splix76 left #gluster
20:43 Splix76 joined #gluster
20:44 Splix76 Can someone tell me if there is a way to limit which interface or network GlusterFS uses? We are setting up a new GlusterFS and the servers can talk over two networks, but we want to limit Gluster to just the second interface.
20:45 Splix76 Server 1: 10.50.0.200 | 192.168.200.10
20:45 Splix76 Server 2: 10.60.0.200 | 192.168.200.11
20:45 Splix76 I want all replication to happen on 192.168.200.x network, but it seems to use both networks regardless
20:45 Splix76 We found that blocking outbound traffic in iptables to a specific IP works, but I would like a more elegant GlusterFS solution than blocking traffic in iptables.
20:46 Splix76 The second network is a P2P tunnel between data centers, has far more throughput and much lower latency.
20:46 Splix76 IPs are of course changed from actual networks used.
20:49 nickage_ Splix76: have you tried transport.socket.bind-address ?
20:49 Splix76 I have not, we'll give it a shot.
20:49 Splix76 thanks for the tip.
20:50 nickage_ Splix76: btw there is a bug https://bugzilla.redhat.com/show_bug.cgi?id=1149863 which was fixed in 3.7
20:50 glusterbot Bug 1149863: urgent, unspecified, ---, ndevos, CLOSED CURRENTRELEASE, Option transport.socket.bind-address ignored
20:53 Splix76 Thanks for the tip on the bug and patched version. Checking version now and will attempt the setting once we confirm version.
20:53 Splix76 Man, I do love me open source projects and IRC support. It's almost always better than closed source paid for solutions. :D
20:53 plarsen joined #gluster
20:56 Splix76 Version is over 3.7, we're testing the transport bind now.
20:56 Splix76 We parsed the docs and did not find that value. I did a search just now and came up empty. Can you link me to the official documented support for this setting?
20:58 Splix76 I usually RTFM first, but in this case I came up empty. Trying to learn more about it in case I have follow up issues that the documentation can help me sort out.
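For what it's worth, the option usually lives in glusterd's own volfile rather than the per-volume docs; roughly, on each server (addresses taken from the example above, then restart glusterd):

    # /etc/glusterfs/glusterd.vol (excerpt)
    volume management
        type mgmt/glusterd
        option working-directory /var/lib/glusterd
        option transport.socket.bind-address 192.168.200.10
    end-volume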
21:01 haomaiwa_ joined #gluster
21:02 bowhunter joined #gluster
21:03 julim joined #gluster
21:10 JoeJulian Splix76: use hostnames.
21:11 JoeJulian Splix76: If your hostname resolves to the ip address you want to use, and your routing tables are set up correctly, it will go over the interface you specified.
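A sketch of that hostname approach using the example addresses from earlier; the names here are made up:

    # /etc/hosts on every server and client: names that resolve to the replication network
    192.168.200.10  gluster1-repl
    192.168.200.11  gluster2-repl

    # peer and create the volume with those names so brick traffic uses that network
    gluster peer probe gluster2-repl
    gluster volume create vol0 replica 2 gluster1-repl:/bricks/b1 gluster2-repl:/bricks/b1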
21:11 ctria joined #gluster
21:12 mhulsman joined #gluster
21:21 calavera joined #gluster
21:36 luizcpg joined #gluster
21:37 mhulsman joined #gluster
21:58 nathwill joined #gluster
22:01 haomaiwa_ joined #gluster
22:15 Splix76 JoeJulian, the issue is that the hostname resolves to the primary adapter, and the secondary adapter is layer-2-only P2P, so it has no hostname. The solution proposed by nickage_ has worked for us; traffic is only on the specified address now.
22:15 Splix76 we could use /etc/hosts and define a host name for the second IP, however I would prefer to bind it to the IP address.
22:16 nathwill joined #gluster
22:20 Splix76 left #gluster
22:37 B21956 joined #gluster
23:01 haomaiwang joined #gluster
23:10 xoritor joined #gluster
23:10 xoritor ok, with a distrepl setup using 5 hosts, is it possible to set replica 3?
23:12 ahino joined #gluster
23:18 plarsen joined #gluster
23:24 nathwill joined #gluster
23:34 nangthang joined #gluster
23:37 ira joined #gluster
23:45 necrogami joined #gluster
23:45 necrogami joined #gluster
23:48 xoritor anyone here answer that?
23:49 xoritor I think I need a number of bricks divisible by the replica count for distrepl
23:49 xoritor ie.. 5 bricks with replica 3 does not work
23:49 xoritor or 10 bricks with a replica of 3
23:49 xoritor either way it does not work
23:52 xoritor i would need to split my 1 disk into 3 bricks for a total of 15... right?
23:53 xoritor that really will suck if i have to do that
23:54 JoeJulian xoritor: right
23:54 xoritor JoeJulian, :-(
23:54 xoritor thats what i was thinking
23:55 xoritor it just makes each partition really small
23:55 JoeJulian You can cheat and just have three brick directories.
23:55 xoritor they are 1TB ssd drives
23:55 xoritor yea that is an option huh?
23:55 xoritor ie.. 1 fs and 3 directories
23:55 xoritor i never think of it that way
23:56 JoeJulian Right. df would be wrong, but it would work.
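A sketch of that, with hypothetical host names; listed in this order, each replica-3 set of consecutive bricks lands on three different hosts:

    # on each of host1..host5: one filesystem, three brick directories
    mkdir -p /bricks/ssd/{a,b,c}
    # 15 bricks at replica 3 -> 5 replica sets, each spanning distinct hosts
    gluster volume create vol0 replica 3 \
        host{1..5}:/bricks/ssd/a \
        host{1..5}:/bricks/ssd/b \
        host{1..5}:/bricks/ssd/c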
23:56 xoritor but then what would happen on rebalancing
23:56 xoritor could it potentially see too much space
23:56 JoeJulian Oh, it's not perfect by a long shot, but it's just a possible solution.
23:57 xoritor would it work to do a replica 3 on 3 bricks and then add in 2 more?
23:57 xoritor i think it will complain and fail
23:58 JoeJulian If you don't think your file sizes are going to exceed approx 170ish GB, I'd just go ahead and use lvm to partition it out.
23:58 JoeJulian Right, it would complain and fail.
23:58 xoritor oh they will
23:58 xoritor i have some over 300
23:58 JoeJulian Another option, though I'm not sure if I'd be ready to recommend it for production yet, is sharding.
23:59 xoritor what about the arbiter stuff?
23:59 JoeJulian Then you can have your 5 disk replica 3.
23:59 xoritor docs?
23:59 JoeJulian arbiter's about quorum.
23:59 JoeJulian @lucky glusterfs sharding
23:59 glusterbot JoeJulian: http://www.gluster.org/community/documentation/index.php/Features/sharding-xlator
23:59 xoritor hah... i just found it
