
IRC log for #gluster, 2014-10-29


All times shown according to UTC.

Time Nick Message
00:21 DV joined #gluster
01:01 David_H_Smith joined #gluster
01:06 David_H_Smith joined #gluster
01:09 David_H_Smith joined #gluster
01:13 glusterbot New news from newglusterbugs: [Bug 1158262] Rebalance failed to rebalance files <https://bugzilla.redhat.com/show_bug.cgi?id=1158262>
01:14 David_H_Smith joined #gluster
01:23 David_H_Smith joined #gluster
01:26 David_H_Smith joined #gluster
01:43 haomaiwang joined #gluster
01:45 haomaiwa_ joined #gluster
01:51 David_H_Smith joined #gluster
01:53 David_H_Smith joined #gluster
01:54 DV joined #gluster
01:55 ira joined #gluster
01:56 meghanam joined #gluster
01:57 meghanam_ joined #gluster
02:14 16WAAC2RL joined #gluster
02:14 RameshN joined #gluster
02:14 sage__ joined #gluster
02:21 harish joined #gluster
02:35 calisto joined #gluster
02:37 David_H_Smith joined #gluster
02:40 David_H_Smith joined #gluster
02:50 calisto1 joined #gluster
02:51 David_H_Smith joined #gluster
02:55 David_H_Smith joined #gluster
02:56 smallbig joined #gluster
03:04 David_H_Smith joined #gluster
03:16 gildub joined #gluster
03:17 David_H_Smith joined #gluster
03:17 guntha_ joined #gluster
03:21 bala joined #gluster
03:24 smallbig raspberry_pi
03:24 smallbig oops
03:27 msmith joined #gluster
03:29 smallbig has anyone tried to build gluster with a raspberry pi? :)
03:33 lalatenduM joined #gluster
03:38 rjoseph joined #gluster
03:39 hflai joined #gluster
03:53 shubhendu joined #gluster
03:53 kanagaraj joined #gluster
03:56 itisravi joined #gluster
03:56 ira joined #gluster
04:02 hagarth joined #gluster
04:07 RameshN joined #gluster
04:15 chirino joined #gluster
04:33 kdhananjay joined #gluster
04:38 jiffin joined #gluster
04:40 msmith joined #gluster
04:41 anoopcs joined #gluster
04:41 atinmu joined #gluster
04:46 JoeJulian smallbig: yes, actually, it's been done.
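For anyone curious, a rough sketch of building GlusterFS from source on a Raspberry Pi, assuming Raspbian with the usual autotools toolchain; the package list and the tarball URL are illustrative, not verified on the Pi:

    sudo apt-get install build-essential autoconf automake libtool flex bison \
        pkg-config libssl-dev libxml2-dev libreadline-dev libaio-dev python-dev attr
    wget http://download.gluster.org/pub/gluster/glusterfs/3.5/3.5.2/glusterfs-3.5.2.tar.gz
    tar xzf glusterfs-3.5.2.tar.gz && cd glusterfs-3.5.2
    ./configure            # expect the configure/compile to take a while on a Pi
    make && sudo make install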
04:49 rafi1 joined #gluster
04:49 Rafi_kc joined #gluster
04:50 sahina joined #gluster
04:57 meghanam joined #gluster
04:57 meghanam_ joined #gluster
05:00 ndarshan joined #gluster
05:01 ppai joined #gluster
05:07 lalatenduM joined #gluster
05:20 kshlm joined #gluster
05:21 MrAbaddon joined #gluster
05:33 David_H_Smith joined #gluster
05:34 Humble joined #gluster
05:34 David_H_Smith joined #gluster
05:36 aravindavk joined #gluster
05:36 atalur joined #gluster
05:39 karnan joined #gluster
05:43 soumya joined #gluster
05:44 glusterbot New news from newglusterbugs: [Bug 1158067] Gluster volume monitor hangs glusterfsd process <https://bugzilla.redhat.com/show_bug.cgi?id=1158067>
05:47 ramteid joined #gluster
05:50 saurabh joined #gluster
05:52 dusmant joined #gluster
06:01 calisto joined #gluster
06:05 overclk joined #gluster
06:06 kshlm joined #gluster
06:06 kshlm joined #gluster
06:11 kshlm joined #gluster
06:14 nishanth joined #gluster
06:27 kumar joined #gluster
06:40 bala1 joined #gluster
06:40 raghu joined #gluster
06:45 Philambdo joined #gluster
06:50 rgustafs joined #gluster
07:02 ctria joined #gluster
07:03 Eric_HOU joined #gluster
07:07 ricky-ti1 joined #gluster
07:15 aravindavk joined #gluster
07:15 atinmu joined #gluster
07:17 RameshN joined #gluster
07:18 kedmison joined #gluster
07:18 Slydder joined #gluster
07:29 andreask joined #gluster
07:31 Fen1 joined #gluster
07:34 aravindavk joined #gluster
07:34 atinmu joined #gluster
07:38 LebedevRI joined #gluster
07:38 lmickh joined #gluster
07:40 deepakcs joined #gluster
08:03 Fen1 joined #gluster
08:07 Slydder morning all
08:11 Thilam 'morning :)
08:12 rwheeler joined #gluster
08:23 RioS2 joined #gluster
08:24 rjoseph joined #gluster
08:31 cjanbanan joined #gluster
08:44 R0ok_ joined #gluster
08:46 Slydder you know, it's nice that gluster has these built-in profiling and performance measurement capabilities and all, but it would be nice if the output could actually be documented somewhere.
08:48 ivok joined #gluster
08:50 Slydder somehow I have to reduce the latency on LK's (whatever those are. not sure because it isn't documented), FSYNC's (same), creates and reads
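A minimal sketch of pulling those per-fop numbers with the built-in profiler, assuming a volume named myvol (the name is a placeholder); LK is the lock fop and FSYNC the fsync fop, and the info output lists per-fop latency (avg/min/max) and call counts:

    gluster volume profile myvol start
    # run the workload for a while, then:
    gluster volume profile myvol info
    gluster volume profile myvol stop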
08:50 dusmant joined #gluster
08:50 ndarshan joined #gluster
08:50 nishanth joined #gluster
09:03 liquidat joined #gluster
09:08 haomaiwa_ joined #gluster
09:09 rgustafs joined #gluster
09:11 Thilam please, could someone tell me when the 3.5.3 packages are planned to be released?
09:14 vikumar joined #gluster
09:21 atalur joined #gluster
09:26 ndarshan joined #gluster
09:36 Eric_HOU left #gluster
09:39 haomaiw__ joined #gluster
09:44 ivok joined #gluster
09:46 RameshN joined #gluster
09:47 deniszh joined #gluster
09:47 nishanth joined #gluster
09:51 karnan joined #gluster
09:53 ricky-ticky joined #gluster
09:55 dogmatic69 joined #gluster
09:56 dogmatic69 hi all, I have a cluster of 3 servers running gluster to share data. One server has just started disconnecting with something like 'Transport endpoint is not connected' in the logs
09:56 dogmatic69 After restarting and remounting gluster it stops after a couple minutes again
09:57 Humble joined #gluster
10:03 haomaiwa_ joined #gluster
10:05 bala joined #gluster
10:05 getup- joined #gluster
10:07 getup- joined #gluster
10:11 getup- hi, we're running debian 7 with gluster 3.2 and we have nodes that mount from localhost. Whenever we reboot the system the mount doesn't come back though.
10:14 karnan joined #gluster
10:14 samsaffron___ joined #gluster
10:14 getup- is there anything we can do that doesn't involve custom init scripts?
10:18 stickyboy joined #gluster
10:24 R0ok_ joined #gluster
10:30 Fen3 joined #gluster
10:40 harish joined #gluster
10:47 kkeithley1 joined #gluster
10:48 RameshN joined #gluster
10:50 partner getup-: might be related: http://joejulian.name/blog/glusterfs-volumes-not-mounting-in-debian-squeeze-at-boot-time/
10:50 glusterbot Title: GlusterFS volumes not mounting in Debian Squeeze at boot time (at joejulian.name)
10:52 getup- partner: yep, I found that article too but we don't have a fuse init script to begin with. I was hoping backupvolfile-server would kick in but it doesn't seem to do that either. The init script I was referring to is the first comment on that article, which does resolve it, but I would prefer to solve it without custom init scripts.
10:52 atinmu joined #gluster
10:55 partner have you tried adding "fuse" to the /etc/modules ?
10:57 dusmant joined #gluster
10:57 getup- yes, that didn't seem to work either
10:58 partner i'm kind of guilty for that article as i started to complain about it, but we settled on doing an rc.local mount on every client; haven't looked into the topic since, but it seems like it's still out there :o
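A sketch of that rc.local workaround, assuming the volume is already listed in /etc/fstab and mounted at /mnt/gluster (path and delay are illustrative):

    # /etc/rc.local, before the final "exit 0"
    (sleep 10 && mount /mnt/gluster) &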
10:58 getup- perhaps its easier to just create a mesh and not depend on local services instead
11:01 partner maybe, it seems there are not many debian users out there on the community, thus these get resolved by each individuals as they see best
11:06 getup- looks like it
11:11 rgustafs joined #gluster
11:16 tdasilva joined #gluster
11:17 kiran joined #gluster
11:17 partner hmph, i wonder if there was some nice way of fixing the "disk layout missing / mismatching layouts" as it keeps flooding the logs repeatedly with the same content
11:20 ppai joined #gluster
11:21 partner 2.1 gigs for one month for two directories
11:23 partner 72 lines per second, had to "fix" this by adding "log-level=WARNING" to fstab but if someone could hint on how to fix it for real (maybe touching some attributes somewhere?) i'd be happy to try it out
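For reference, the fstab workaround partner describes looks roughly like this (server, volume, and mount point are made up; log-level only quiets the messages, it does not fix the underlying layout mismatch):

    server1:/myvol  /mnt/gluster  glusterfs  defaults,_netdev,log-level=WARNING  0 0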
11:24 B21956 joined #gluster
11:27 sputnik13 joined #gluster
11:28 soumya joined #gluster
11:30 atalur joined #gluster
11:30 mojibake joined #gluster
11:30 partner and broken logrotation makes it a bit worse as the logs never get compressed nor rotated away
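As a stopgap for the unrotated logs, a minimal logrotate stanza, assuming the client logs live under /var/log/glusterfs/ (a sketch, not the packaged config; copytruncate avoids having to signal the running processes, which keep their log files open):

    # /etc/logrotate.d/glusterfs-client (sketch)
    /var/log/glusterfs/*.log {
        weekly
        rotate 4
        compress
        missingok
        notifempty
        copytruncate
    }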
11:32 calisto joined #gluster
11:32 Debolaz Logging is probably the main weak point of Gluster atm.
11:35 Slashman joined #gluster
11:43 ppai joined #gluster
11:45 edward1 joined #gluster
11:49 davemc community meeting starts in 15 minutes. Please join us on the #gluster-meeting channel
11:51 virusuy joined #gluster
11:51 virusuy joined #gluster
11:51 calisto joined #gluster
11:55 meghanam_ joined #gluster
11:56 meghanam joined #gluster
11:59 meghanam joined #gluster
11:59 meghanam_ joined #gluster
11:59 overclk joined #gluster
12:01 kanagaraj joined #gluster
12:04 jdarcy joined #gluster
12:04 atinmu joined #gluster
12:06 calisto joined #gluster
12:06 FarbrorLeon joined #gluster
12:13 partner oh, i thought the meeting was yesterday :o
12:13 hagarth partner: yesterday was the bug triage meeting :O
12:17 lpabon joined #gluster
12:20 anoopcs joined #gluster
12:22 theron joined #gluster
12:22 itisravi_ joined #gluster
12:33 chirino joined #gluster
12:37 partner oh, true da :)
12:37 partner had a reminder for myself but as i was on sick leave i chose not to join, i'm not an asset there anyways and can read the memo afterwards :o
12:38 hagarth partner: every bit of participation helps, nevertheless!
12:39 partner also true, just struggling with time here :/
12:39 theron joined #gluster
12:39 mbukatov joined #gluster
12:45 glusterbot New news from newglusterbugs: [Bug 1158456] glusterfs logrotate config file pollutes global config <https://bugzilla.redhat.com/show_bug.cgi?id=1158456>
12:47 partner speaking of logs..
12:52 theron joined #gluster
12:55 theron joined #gluster
12:56 nbalachandran joined #gluster
12:57 partner hmph, is there some more silent way of monitoring certain bugs of interest other than saving the changes and making it spam people? other than "bookmark this page.."
12:59 getup joined #gluster
13:00 dusmant joined #gluster
13:01 Fen1 joined #gluster
13:02 itisravi_ left #gluster
13:03 B21956 joined #gluster
13:04 cjanbanan left #gluster
13:04 tdasilva joined #gluster
13:04 hollaus joined #gluster
13:05 theron joined #gluster
13:07 ira joined #gluster
13:10 todakure joined #gluster
13:12 hollaus Hi! Is it possible to add a non-empty brick to an existing replicating volume? Would GlusterFS be able to make use of the existing files on that new brick when self-healing the volume or would it have to delete the files anyway?
13:15 dusmant joined #gluster
13:26 bennyturns joined #gluster
13:29 sac`away joined #gluster
13:34 R0ok_ joined #gluster
13:36 jobewan joined #gluster
13:39 DavidProvencher joined #gluster
13:39 DavidProvencher Hey
13:40 DavidProvencher Anyone ever used gluster to replicate a database file system (read&write) to another, read-only one?
13:48 rajesh joined #gluster
13:51 partner as far as i know it's not recommended to run any database (except a just-about-idle one) on top of a glusterfs volume
13:51 partner that being said i'm sure there are some out there who have tried it :o
13:52 partner http://www.gluster.org/community/documentation/index.php/Basic_Gluster_Troubleshooting#15._I_am_getting_weird_errors_and_inconsistencies_from_a_database_I_am_running_in_a_Gluster_volume
13:52 sputnik13 joined #gluster
13:54 ndevos partner: bugs@gluster.org gets all bug updates...
13:54 _dist joined #gluster
13:55 ndevos partner: you could also build your own rss-feed with selected bugs
13:55 ndevos http://www.gluster.org/community/documentation/index.php/Bugzilla_Notifications may help too
14:00 partner ndevos: thanks, i just wanted to silently monitor changes on certain bugs, i probably already earlier caused unnecessary change email for you and couple of other ones on CC list..
14:01 partner i guess i'll just bookmark them and refer back manually, probably not at all interested in all the possible bugs around
14:01 sputnik13 joined #gluster
14:02 ndevos partner: I dont think anyone cares about CC notifications, feel free to add yourself to the interesting bugs
14:04 chirino joined #gluster
14:06 Thilam please, could someone tell me when the 3.5.3 packages are planned to be released?
14:06 ramteid joined #gluster
14:08 MrAbaddon joined #gluster
14:11 plarsen joined #gluster
14:14 wushudoin joined #gluster
14:14 meghanam joined #gluster
14:15 meghanam_ joined #gluster
14:17 MrAbaddon joined #gluster
14:19 partner not sure if it will ever be out, better just to grab the sources and package it yourself
14:21 Maitre joined #gluster
14:22 partner depends of course which exactly platform are you talking about?
14:22 Maitre How's it going dudes?
14:28 shubhendu joined #gluster
14:35 kshlm joined #gluster
14:45 _dist Maitre: coffee
14:53 partner hmm good idea ->
14:53 Maitre :P
15:03 dberry joined #gluster
15:13 bennyturns joined #gluster
15:14 R0ok_ joined #gluster
15:25 lpabon joined #gluster
15:28 lpabon joined #gluster
15:31 overclk|afk joined #gluster
15:34 theron joined #gluster
15:38 coredump joined #gluster
15:58 chirino joined #gluster
16:02 sputnik13 joined #gluster
16:04 edward1 joined #gluster
16:06 sputnik13 joined #gluster
16:08 soumya joined #gluster
16:08 zerick joined #gluster
16:27 MrAbaddon joined #gluster
16:29 davemc Thilam, we are planning a new beta, with a couple of patches, then check readiness for a 3.5.3 release
16:29 meghanam_ joined #gluster
16:38 bazzles joined #gluster
16:40 haomaiwang joined #gluster
16:44 d4nku joined #gluster
16:44 getup joined #gluster
16:46 d4nku Hello all, can I have some recommendations/links for a best-practice build-out of an 8 server/brick gluster setup? I have dug around the interwebs but am looking for a more direct opinion
16:46 d4nku Thank you ahead of time.
16:48 firemanxbr joined #gluster
16:49 coredump joined #gluster
16:50 virusuy joined #gluster
16:51 lmickh joined #gluster
16:53 Pupeno_ joined #gluster
16:55 kumar joined #gluster
17:07 jobewan joined #gluster
17:08 zerick joined #gluster
17:09 Pupeno joined #gluster
17:14 rshott joined #gluster
17:15 hchiramm_ joined #gluster
17:16 meghanam_ joined #gluster
17:16 meghanam joined #gluster
17:17 haomaiwa_ joined #gluster
17:17 ron-slc joined #gluster
17:23 _dist d4nku: Why have you already decided you want 8 bricks?
17:23 ricky-ticky joined #gluster
17:25 calisto joined #gluster
17:26 JoeJulian d4nku: Start with use case and build storage to meet your needs, not the other way around.
17:26 d4nku _dist: 8 bricks total, replicated. I went with 8 because of the bandwidth needs. I will have each brick configured with 2 1GbE bonded.
17:27 JoeJulian Granted, sometimes that's hard until something breaks and you define a new need...
17:27 d4nku _dist: Understood.
17:28 _dist d4nku: 8 replicas will be pretty rough on 2*1gbe. I wouldn't recommend it. You need 8 different locations to all be in sync?
17:28 PeterA joined #gluster
17:29 Pupeno_ joined #gluster
17:29 _dist I mean I guess if you aren't writing all that much it won't be a problem.
17:31 _dist d4nku: You're looking at a best case of 32 MBytes/sec of write (2048 Mbit/s / 8 = 256 MByte/s, / 8 replicas = 32). Your read speed will be impressive though
17:31 firemanxbr hi guys, I'm using CentOS 7 with glusterfs 3.5.2, but when my server reboots the glusterd.service doesn't come back
17:31 firemanxbr glusterd.service                                          loaded failed failed  GlusterFS, a clustered file-system server
17:31 firemanxbr any idea about this?
17:32 stickyboy joined #gluster
17:35 JoeJulian To quote Microsoft, "Something happened."
17:35 JoeJulian Look in the logs, /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
17:37 firemanxbr JoeJulian: ohhh, thanks, I've seen this post: http://blog.gluster.org/2013/12/gluster-and-not-restarting-brick-processes-upon-updates-2/, I believe that solves it.
17:39 JoeJulian I'm not sure. I try to avoid randomly stopping actively used storage.
17:41 rafi1 joined #gluster
17:45 firemanxbr JoeJulian: exactly, this post is not perfect for me
17:46 firemanxbr JoeJulian: I believe the problem is that systemd starts my glusterd.service before my network is up
17:46 glusterbot New news from newglusterbugs: [Bug 1158614] Client fails to pass xflags for unlink call <https://bugzilla.redhat.com/show_bug.cgi?id=1158614>
17:49 coredump joined #gluster
17:50 JoeJulian Well... it should... "After=network.target"
17:51 firemanxbr JoeJulian: hum great point, I agree, my log this it: http://ur1.ca/iljbv
17:52 glusterbot Title: #146245 Fedora Project Pastebin (at ur1.ca)
17:52 d4nku _dist: Correct, all 8 locations will be synced. I'm separating the data and replication networks (jumbo frames enabled) onto their own bonds. I'll have two switches with a 40 Gbps stack.
17:52 samkottler joined #gluster
17:52 bene joined #gluster
17:52 _dist d4nku: you're ok with that write speed? all locations will be connected to each other via a 2gbps link?
17:53 d4nku _dist: correct, locations will be on a 2gbps link
17:54 _dist d4nku: ok, well you'll probably get around 256 MByte/s read + local brick speed if your bottleneck is the network. For write you'll be lucky to stay above 30 though with that many replicas
17:56 _dist d4nku: but your original question was how to do it. I recommend a separate network for the gfs traffic, static IPs, and using DNS names to set up the volume. RHEL recommends xfs for your brick data fs. What's the use case?
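A rough sketch of that kind of setup on one node, assuming /dev/sdb as the brick device, gs01..gs08 as DNS names for the eight servers, and imgvol as the volume name (all illustrative):

    mkfs.xfs -i size=512 /dev/sdb
    mkdir -p /bricks/brick1 && mount /dev/sdb /bricks/brick1
    # after 'gluster peer probe' of gs02..gs08 by hostname, from one node:
    gluster volume create imgvol replica 8 \
        gs01:/bricks/brick1/data gs02:/bricks/brick1/data gs03:/bricks/brick1/data gs04:/bricks/brick1/data \
        gs05:/bricks/brick1/data gs06:/bricks/brick1/data gs07:/bricks/brick1/data gs08:/bricks/brick1/data
    gluster volume start imgvol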
17:56 firemanxbr JoeJulian: my systemd config file has: After=network.target rpcbind.service, but after a reboot the service doesn't come back
17:56 JoeJulian Is glusterd enabled?
17:57 JoeJulian Look at "systemctl status glusterd.service" if you're not sure.
17:57 JoeJulian If it's disabled, it won't start at boot. "systemctl enable glusterd.service" to remedy that.
17:59 d4nku _dist: It will be storing millions and maybe billions of images and a lot of videos.
18:00 d4nku _dist: I'm aware that small image files will create high latency.
18:01 firemanxbr JoeJulian: my systemd glusterd.service is enabled, but I'm not running glusterfsd.service; is it necessary for my brick node?
18:01 d4nku _dist: thanks, you opened my eyes to a possible huge obstacle
18:02 firemanxbr JoeJulian: in my case, neither is running after a system reboot, with glusterfsd.service enabled or disabled
18:02 JoeJulian no, glusterfsd.service is for legacy configurations (and I think to stop bricks before shutdown)
18:02 _dist d4nku: lots of small files only create latency issues depending on how you access them. I think your setup will perform ok for read, but poorly for write.
18:04 firemanxbr JoeJulian: humm good information
18:05 JoeJulian This is interesting and I didn't know this. (semiosis, did you?)  You don't have to worry about starting glusterd before the network is configured. Start it as early as possible. "servers: listen on [::], [::1], 0.0.0.0 and 127.0.0.1. These pseudo-addresses are unconditionally available."
18:05 JoeJulian Since glusterd does listen on 0.0.0.0 it shouldn't matter when the network starts.
18:06 semiosis JoeJulian: i would have assumed as much, but dont recall it ever coming up
18:07 JoeJulian d4nku: make sure you tree-out your directory structure. The worst thing for latency with that many files is directory reads.
18:07 firemanxbr JoeJulian: in my configuration file (systemd path: /usr/lib/systemd/system/glusterd.service) it's configured as 'After=network.target rpcbind.service'; question: is it possible to add 'glusterfsd.service'?
18:07 JoeJulian semiosis: I'm pretty sure even the upstart job waited for network, didn't it?
18:07 semiosis well
18:07 semiosis that's a long story
18:08 semiosis turns out even that hook was incorrect and it ended up waiting for the default timeout of 30s :(
18:08 semiosis but the motivation for that was to block the mount attempt until the interfaces were up
18:08 d4nku _dist: Gotcha.
18:08 JoeJulian firemanxbr: no. Just pretent that glusterfsd.service doesn't exist. It has no influence on starting your system.
18:08 semiosis even though glusterd would be listening on 0.0.0.0 the client's request to A.B.C.D wouldn't get routed properly until later
18:09 JoeJulian s/pretent/pretend/
18:09 glusterbot What JoeJulian meant to say was: firemanxbr: no. Just pretend that glusterfsd.service doesn't exist. It has no influence on starting your system.
18:09 d4nku JoeJulian: I do not understand "tree-out your directory structure" do you have a read/link? I was not able to find anything
18:09 JoeJulian @lucky directory tree
18:10 glusterbot JoeJulian: https://windirstat.info/
18:10 JoeJulian hmm, wasn't lucky.
18:10 ron-slc joined #gluster
18:11 firemanxbr JoeJulian: okay, I'll search for another solution for the system restart, but my brick is down after these restarts :(
18:11 jobewan joined #gluster
18:11 JoeJulian firemanxbr: did glusterd start?
18:12 firemanxbr JoeJulian: automatic no, only manual: 'systemctl start glusterd.service' after system reboot
18:13 JoeJulian d4nku: Just design your directory structure such that a directory doesn't have more than a couple (few?) thousand entries.
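One common way to keep per-directory entry counts down is to fan files out by a hash prefix; a purely illustrative shell sketch, assuming images land under /mnt/gluster/images with two levels of up-to-256 directories each:

    f=photo123.jpg
    h=$(echo -n "$f" | md5sum | cut -c1-4)      # first 4 hex chars of the name's md5
    dir=/mnt/gluster/images/${h:0:2}/${h:2:2}   # e.g. /mnt/gluster/images/3f/a2
    mkdir -p "$dir" && cp "$f" "$dir/"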
18:13 semiosis JoeJulian: tbh, i'll be glad once debuntu is on systemd.  maintaining these upstart jobs has been a royal PITA
18:13 d4nku JoeJulian: I see thanks
18:14 calisto joined #gluster
18:14 JoeJulian firemanxbr: ok. Truncate that log file I mentioned earlier. Make it fail. Go to fpaste.org and paste that log file there, create the link, and paste the link here.
18:14 _dist JoeJulian: wasn't there some fix.. I can't remember the name of it. But it fixed the latency on directory trees, I can't believe I'm failing this hard at remembering the concept
18:15 JoeJulian _dist: readdirplus. Yes, it's much better than it used to be, but that's still a problem area if you're looking for performance.
18:15 _dist JoeJulian: thank you, yes that was it
18:16 firemanxbr JoeJulian: okay, I'll clean my log and reboot the server :D I hope for much better :)
18:17 JoeJulian _dist: tbh, put a couple million files in a local filesystem and you can measure a difference in readdir time.
18:17 glusterbot New news from newglusterbugs: [Bug 1158622] SELinux denial when mounting glusterfs nfs volume when using base-port option <https://bugzilla.redhat.com/show_bug.cgi?id=1158622>
18:17 jbrooks that's mine :)
18:17 JoeJulian d'oh
18:17 firemanxbr JoeJulian: link for my log: http://ur1.ca/ilk35
18:18 glusterbot Title: #146260 Fedora Project Pastebin (at ur1.ca)
18:18 n-st joined #gluster
18:19 JoeJulian Ah, yes... sure enough... "[2014-10-30 02:13:51.278990] E [name.c:249:af_inet_client_get_remote_sockaddr] 0-management: DNS resolution failed on host 10.0.126.11"
18:19 JoeJulian So the server doesn't know who itself is and can't start its own bricks.
18:20 JoeJulian So why isn't network started... Are you using NetworkManager?
18:20 firemanxbr JoeJulian: but my bricks were created using IPs, and all the IPs are added in my /etc/hosts
18:20 firemanxbr JoeJulian: is possible :D
18:20 JoeJulian don't. :D
18:21 firemanxbr systemctl is-enabled NetworkManager.service
18:21 firemanxbr disabled
18:21 firemanxbr JoeJulian: no, that's not my problem :D
18:21 JoeJulian ~pasteinfo | firemanxbr
18:21 glusterbot firemanxbr: Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
18:23 JoeJulian Oh, that makes sense.
18:24 JoeJulian Since you defined your bricks and servers with IP addresses instead of names, if the server looks at the configured addresses and can't find one that matches the bricks, it can't figure out which bricks are its own.
18:24 firemanxbr JoeJulian: all information for my brick gluster in this server: http://ur1.ca/ilka1
18:24 glusterbot Title: #146265 Fedora Project Pastebin (at ur1.ca)
18:24 lalatenduM joined #gluster
18:25 JoeJulian If you used ,,(hostnames) then you could add each servers own hostname to 127.0.0.1 in /etc/hosts and it would always be able to find itself.
18:25 glusterbot Hostnames can be used instead of IPs for server (peer) addresses. To update an existing peer's address from IP to hostname, just probe it by name from any other peer. When creating a new pool, probe all other servers by name from the first, then probe the first by name from just one of the others.
18:26 JoeJulian Since you've already defined bricks, though, there's no way to change those to hostnames. :(
18:26 JoeJulian *
18:27 JoeJulian * You can, but it involves shutting down all your servers and manually replacing IP addresses with hostnames for the files under /var/lib/glusterd/vols on all your servers.
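A sketch of that manual swap, assuming the 10.0.126.11 address seen in the log should become the hostname srv1 (the hostname is illustrative), that glusterd is stopped on every server first, and that /var/lib/glusterd is backed up before touching anything:

    systemctl stop glusterd
    cp -a /var/lib/glusterd /var/lib/glusterd.bak
    grep -rl '10.0.126.11' /var/lib/glusterd/vols | xargs sed -i 's/10\.0\.126\.11/srv1/g'
    systemctl start glusterd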
18:31 firemanxbr JoeJulian: humm okay, is it possible to run this change from the command line? for example: gluster volume set [.....] ?
18:31 firemanxbr JoeJulian: is it possible for me to use: volume replace-brick ?
18:32 theron joined #gluster
18:33 JoeJulian not really, no.
18:33 _Bryan_ joined #gluster
18:34 d4nku Can I get a recommendation for good monitoring for gluster? All I have found is to use nagios and/or cacti.
18:34 rotbeard joined #gluster
18:35 aswin joined #gluster
18:36 _dist d4nku: we use nagios, custom script though honestly
18:36 JoeJulian Monitoring solutions are like religions. There's one for every belief system. Pick the one that gives you the output you feel is most useful. I found icinga easy to read.
18:37 _dist JoeJulian: that's actually the "nagios" rewrite we are using, icinga2
18:37 JoeJulian I hear semiosis played with datadog recently and liked the 5 minutes of play he had with it.
18:38 _dist well I like that name, I'll give it that.
18:40 semiosis JoeJulian: been more than 5 minutes now, and i love it
18:41 kanagaraj joined #gluster
18:49 jbrooks Has anyone else experienced this: install glusterfs-server, start the service, and can't create volumes until after rebooting, due to an selinux denial
18:50 jbrooks (this is the 2nd selinux/gluster thing I'm puzzling about atm)
18:52 jbrooks looks like https://bugzilla.redhat.com/show_bug.cgi?id=1108448
18:52 glusterbot Bug 1108448: high, medium, ---, gluster-bugs, NEW , selinux alerts starting glusterd in f20
18:52 d4nku _dist: K thanks
18:53 JoeJulian jbrooks: lol! I was about to say that I don't recall seeing that happen.
18:53 jbrooks :)
18:54 jbrooks Seems like a reboot sets things right
18:54 jbrooks For whatever reason
18:54 JoeJulian I think it's fixed in selinux updates though. Did that get updated before the reboot maybe?
18:55 jbrooks JoeJulian, hmmm, I think I rebooted after updating
18:55 jbrooks but this is centos 7, too, so maybe it doesn't have that update
18:59 Pupeno joined #gluster
19:01 ricky-ticky joined #gluster
19:03 lpabon joined #gluster
19:07 LebedevRI joined #gluster
19:15 lpabon joined #gluster
19:19 SOLDIERz joined #gluster
19:29 Pupeno_ joined #gluster
19:32 theron joined #gluster
19:34 Pupeno joined #gluster
19:47 glusterbot New news from newglusterbugs: [Bug 1158654] [FEAT] New Style Replication (NSR) <https://bugzilla.redhat.com/show_bug.cgi?id=1158654>
19:49 SOLDIERz joined #gluster
19:49 epequeno joined #gluster
19:51 coredump joined #gluster
20:05 deniszh joined #gluster
20:21 plarsen joined #gluster
20:24 deniszh joined #gluster
20:32 sputnik13 joined #gluster
20:33 theron joined #gluster
20:43 rshott joined #gluster
20:48 SOLDIERz joined #gluster
20:49 firemanxbr joined #gluster
20:54 failshell joined #gluster
20:59 deniszh joined #gluster
21:01 drewskis1 joined #gluster
21:04 drewskis1 hello
21:04 glusterbot drewskis1: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
21:08 failshel_ joined #gluster
21:10 JoeJulian davemc: I could put together a talk for vault...
21:10 JoeJulian ... unfortunately it looks like it would have to be about migrating openstack from glusterfs to ceph... :/
21:12 JoeJulian Hehe... ironic timing for sage to have his nick correct itself.
21:12 johnmark lol
21:17 davemc JoeJulian, pffft
21:20 JoeJulian Not my choice, btw...
21:31 andreask joined #gluster
21:44 magamo joined #gluster
21:44 magamo Hello folks.  Is there a way to show the current replication value for a volume?
21:45 magamo We have a setup that was supposed to have 4 bricks, and a replication factor of two, but after trying to resize the filesystems on half of the bricks, it turns out that they were in different subvolumes, so resizing did little/nothing.
21:46 semiosis davemc: i might be interested in talking about my java/glusterfs project
21:46 magamo So we've been trying several experiments to get the cluster to a properly usable state, including kicking out bricks, and readding them (We kicked out the smaller bricks, and when we readded them, they got pulled into the same subvols as before, so no space gain was netted)
21:46 davemc semiosis, way cool
21:47 semiosis ,,(java) fyi
21:47 glusterbot https://github.com/semiosis/glusterfs-java-filesystem
21:47 magamo And then we've kicked out all but one node (decreasing replica to 1), and when trying to add in the brick of the same size, with 'replica 2', it says it added, and the size looks proper on a fuse mount with df...
21:48 magamo But when we try and rebalance, it says that the volume is not distributed, or contains only one brick.
21:49 theron joined #gluster
21:49 semiosis magamo: use 'gluster volume info' to see the bricks & replica count.  the bricks listed are in replica sets, for example if you have replica 2, and a list of 6 bricks, then the first two are a replicated pair, the next two are another, as are the last two
21:50 magamo Hrm, but it doesn't just come out and say that replica factor is two?  Might be useful information to add in there.
21:50 davemc semiosis, I seem to recall a java mention in the current survey responses
21:52 semiosis magamo: "Number of Bricks: D x R = T" D=distribute, R=replica, T=total
21:52 semiosis at least thats how it looks on 3.4.2
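For illustration, the relevant lines of 'gluster volume info' for a 1 x 2 volume look something like this (volume and brick names are made up):

    Volume Name: myvol
    Type: Replicate
    Status: Started
    Number of Bricks: 1 x 2 = 2
    Transport-type: tcp
    Bricks:
    Brick1: server1:/bricks/brick1/data
    Brick2: server2:/bricks/brick1/data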
21:53 semiosis davemc: cool!  looking forward to seeing the final results
21:54 davemc semiosis, let me know if I can help get your proposal into the CFP system
21:54 semiosis davemc: ok thanks!
21:55 magamo So, if I have 1 x 2 = 2, that means I've got 1 set of two replicants?
21:55 magamo Alright, cool.  Makes sense, especially since the other volume we have on this cluster is Distribute, with two bricks.
21:56 magamo Is there a way, now that this is a replication factor two volume, to force the replication process to happen for the data already present?
21:57 semiosis 'gluster volume heal <volname> full'
21:57 coredump joined #gluster
21:57 magamo Hrm.  Claims the self-heal daemon is not running, to which gluster volume status disagrees.
21:58 semiosis restart glusterd/glusterfs-server
21:58 semiosis also, what version of gluster?  what distro?
21:58 magamo 3.5.2, on CentOS 6.5
21:58 magamo Installed from the glusterfs repo.
21:59 magamo Restarted the glusterd service, and tried the full heal again, and again, claims Self-heal daemon isn't running.
22:00 magamo Yet, gluster volume status claims a self heal daemon is running on all four peers in the cluster.
22:02 magamo It appears to try healing when I issue a 'gluster volume heal <volume>' instead of a 'gluster volume heal <volume> full'
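A few commands that help show what the self-heal daemon is actually doing on 3.5, assuming a volume named myvol (placeholder name):

    gluster volume heal myvol info                 # entries still pending heal
    gluster volume heal myvol info heal-failed     # entries whose heal attempts failed
    gluster volume heal myvol info split-brain     # entries in split-brain
    gluster volume status myvol                    # shows whether each self-heal daemon is online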
22:04 drewskis1 i have a 200mill file transfer on a 3.3.1 gluster install which is taking many days to fix replica 2 in a two node cluster (2.5ghz 4 core 2 threads each, 8gb ram, ssd raid 10) 25% nodes used. the last 35gb of 580gb are barely moving at a couple mb per hr. the network is idle, no swapping involved, 60% of ram is used, rest is cached and buffered to almost peak, cpu moderate if not low, nothing in logs. what should i feed this server?
22:04 theron joined #gluster
22:04 magamo Okay, suddenly launching it with full works.
22:04 semiosis magamo: \o/
22:04 magamo But it doesn't seem to be doing anything.  Volume status claims no volume tasks, I still see no usage on the second brick.
22:05 tty00 joined #gluster
22:06 magamo Though, it does say one file on the first brick is 'possibly undergoing heal'.
22:06 magamo But, it's been saying that for a while.... And again, nothing is seeming to be getting replicated.
22:06 JoeJulian drewskis1: The default self-heal-algorithm for files that already exist on both bricks is "diff". This means that you'll only have traffic for blocks that differ.
22:10 tty00 left #gluster
22:11 bennyturns joined #gluster
22:19 magamo Erm.  Okay, that's interesting.  I'm showing every file on the volume as being in the 'heal-failed' list.
22:22 magamo How can I fix that, if at all?
22:26 drewskis1 hey, it's me again looking for some help. Gluster 3.3.1, pretty decent servers, replicating to the 2nd node, very slow healing (days). what should i look into, RAM??
22:32 drewskis1 hey julian, it sounds like diff would use more cpu; what would full use??
22:34 semiosis idk what the heal-failed list means, but if you could find some healing errors in logs maybe i can help
22:34 semiosis but leaving soon
22:35 magamo I'm actually at home now, and I'm probably not going to work on this until tomorrow.
22:36 magamo I'll take a look when I get to the office in the morning.
22:36 semiosis ok
22:37 magamo What I was referring to was 'gluster volume heal <volname> info heal-failed' (Or maybe it was just heal <volume> heal-failed)
22:37 MacWinner joined #gluster
22:41 drewskis1 hi, the node you're having problems with, has it been completely wiped??? partially wiped??
22:41 JoeJulian magamo: The heal-failed (unless they changed it) is a log. Should have a timestamp that should be in one of the self-heal or client logs. That's the only place to find the reason for the fail.
22:42 JoeJulian drewskis1: right, full wouldn't use as much cpu (no hash computation necessary) but would use more network and, typically, will take longer.
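If CPU rather than network is the suspected bottleneck for the diff algorithm, switching it per volume is one thing to try (volume name is a placeholder; reset reverts to the default afterwards):

    gluster volume set myvol cluster.data-self-heal-algorithm full
    # and to go back to the default behaviour:
    gluster volume reset myvol cluster.data-self-heal-algorithm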
22:47 sazze joined #gluster
22:56 sazze hi, my co-worker Andrew said someone else was having healing problems?  Not even starting??
22:58 semiosis sazze: you can get links to the channel logs by typing /topic
23:04 sazze magamo?
23:10 glusterbot New news from resolvedglusterbugs: [Bug 1155285] twitter link on community page broken <https://bugzilla.redhat.com/show_bug.cgi?id=1155285>
23:12 magamo sazze: Heya!
23:14 Pupeno joined #gluster
23:14 Pupeno joined #gluster
23:21 badone joined #gluster
23:21 Pupeno_ joined #gluster
23:32 DougBishop joined #gluster
23:34 gildub joined #gluster
23:34 DougBishop joined #gluster
23:36 badone joined #gluster
23:38 justinmburrous joined #gluster
23:56 haomaiwang joined #gluster
