IRC log for #gluster, 2015-03-19

All times shown according to UTC.

Time Nick Message
00:00 lifeofguenter joined #gluster
00:01 luis_silva joined #gluster
00:02 lifeofgu_ joined #gluster
00:06 lifeofguenter joined #gluster
00:08 lifeofgu_ joined #gluster
00:10 lifeofg__ joined #gluster
00:10 Pupeno_ joined #gluster
00:12 lifeofguenter joined #gluster
00:12 prilly_ joined #gluster
00:15 lifeofgu_ joined #gluster
00:19 lifeofguenter joined #gluster
00:20 theron joined #gluster
00:22 lifeofgu_ joined #gluster
00:22 badone joined #gluster
00:23 lifeofg__ joined #gluster
00:25 lifeofguenter joined #gluster
00:28 T3 joined #gluster
00:31 lifeofguenter joined #gluster
00:33 lifeofgu_ joined #gluster
00:33 _zerick_ joined #gluster
00:33 _zerick_ joined #gluster
00:35 lifeofg__ joined #gluster
00:35 dgandhi joined #gluster
00:39 lifeofguenter joined #gluster
00:40 lifeofgu_ joined #gluster
00:42 lifeofg__ joined #gluster
00:44 bala joined #gluster
00:46 lifeofguenter joined #gluster
00:48 lifeofgu_ joined #gluster
00:50 lifeofg__ joined #gluster
00:52 hagarth joined #gluster
00:52 lifeofguenter joined #gluster
00:54 lifeofgu_ joined #gluster
00:55 topshare joined #gluster
00:56 lifeofg__ joined #gluster
00:58 lifeofguenter joined #gluster
01:07 topshare_ joined #gluster
01:07 lifeofgu_ joined #gluster
01:09 lifeofg__ joined #gluster
01:13 lifeofguenter joined #gluster
01:14 jermudgeon joined #gluster
01:15 lifeofgu_ joined #gluster
01:17 lifeofg__ joined #gluster
01:19 DV joined #gluster
01:21 lifeofgu_ joined #gluster
01:23 lifeofg__ joined #gluster
01:24 plarsen joined #gluster
01:26 lifeofguenter joined #gluster
01:29 T3 joined #gluster
01:30 lifeofgu_ joined #gluster
01:32 lifeofguenter joined #gluster
01:34 lifeofg__ joined #gluster
01:36 lifeofgu_ joined #gluster
01:38 lifeofguenter joined #gluster
01:39 Prilly joined #gluster
01:40 lifeofg__ joined #gluster
01:41 deniszh joined #gluster
01:42 lifeofgu_ joined #gluster
01:44 lifeofguenter joined #gluster
01:49 lifeofguenter joined #gluster
01:52 lifeofgu_ joined #gluster
01:55 lifeofguenter joined #gluster
01:56 punit_ hagarth, did you got any luck on my issue ??
01:57 punit_ hagarth,please try to help me to resolve this issue...if not then i will switch from gluster to any other stable storage technologies
01:57 lifeofgu_ joined #gluster
01:57 punit_ hagarth, it seems gluster is not stable for the production use ??
01:57 punit_ all my VM
01:58 punit_ are down from last 3 days...
02:00 prilly joined #gluster
02:01 DV__ joined #gluster
02:03 lifeofguenter joined #gluster
02:05 lifeofgu_ joined #gluster
02:09 lifeofguenter joined #gluster
02:11 _polto_ joined #gluster
02:12 lifeofgu_ joined #gluster
02:13 nangthang joined #gluster
02:15 lifeofguenter joined #gluster
02:18 jmarley joined #gluster
02:20 lifeofguenter joined #gluster
02:22 lifeofgu_ joined #gluster
02:24 lifeofg__ joined #gluster
02:26 lifeofguenter joined #gluster
02:29 haomaiwa_ joined #gluster
02:30 lifeofgu_ joined #gluster
02:38 badone joined #gluster
02:47 ilbot3 joined #gluster
02:47 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:56 glusterbot News from newglusterbugs: [Bug 1203506] Bad Volume Specification | Connection Time Out <https://bugzilla.redhat.com/show_bug.cgi?id=1203506>
02:59 gem joined #gluster
03:18 T3 joined #gluster
03:18 harish_ joined #gluster
03:25 bharata-rao joined #gluster
03:36 sputnik13 joined #gluster
03:42 shubhendu joined #gluster
03:45 hagarth joined #gluster
03:50 hagarth punit_: we have been trying our best to help what apparently could be a configuration issue at your end. please do not make assertions or demand support in this channel, the community is driven by volunteers and we offer help to the best extent that other constraints allow us.
03:53 hagarth punit_: raghu has been looking into your problem and he will offer help when he gets online.
03:54 hagarth punit_: also, please stop initiating private messages/ctcp ping requests. It is not a polite thing to be doing in any community, neither do we appreciate it.
03:55 itisravi joined #gluster
03:56 Pupeno joined #gluster
04:02 lpabon joined #gluster
04:10 punit_ hagarth, apologies if i become rude....i do understand that it's community and driven by volunteers...once again i am sorry if you feel it to be rude...
04:11 punit_ hagarth, i will wait for raghu updates...
04:11 nangthang joined #gluster
04:12 RameshN joined #gluster
04:17 kdhananjay joined #gluster
04:18 nthomas joined #gluster
04:19 nbalacha joined #gluster
04:19 kumar joined #gluster
04:19 kanagaraj joined #gluster
04:21 T3 joined #gluster
04:21 anoopcs joined #gluster
04:27 jiffin joined #gluster
04:33 ppai joined #gluster
04:37 p0licy joined #gluster
04:40 RameshN joined #gluster
04:40 meghanam joined #gluster
04:46 soumya joined #gluster
04:48 DV joined #gluster
04:49 nbalacha joined #gluster
04:53 SOLDIERz______ joined #gluster
04:56 schandra joined #gluster
04:59 rafi joined #gluster
05:09 smohan joined #gluster
05:19 smohan joined #gluster
05:21 ppp joined #gluster
05:23 bharata joined #gluster
05:23 lalatenduM joined #gluster
05:26 rjoseph joined #gluster
05:30 ashiq joined #gluster
05:31 Manikandan joined #gluster
05:40 karnan joined #gluster
05:41 Apeksha joined #gluster
05:43 overclk joined #gluster
05:43 hgowtham joined #gluster
05:54 Bhaskarakiran joined #gluster
05:54 anil_ joined #gluster
05:59 raghu joined #gluster
06:00 gem joined #gluster
06:01 raghu punit_: sorry. yesterday I was not able to spend more time on the issue u mentioned.
06:01 punit_ raghu,it's ok...i can understnd ....
06:01 punit_ raghu,is it good time to have a look at my issue
06:02 raghu punit_: yeah. I will take a look now. Can I get these info? 1) brick logs 2) client logs 3) glusterd logs 4) o/p of gluster volume info <volume name> AND gluster volume status <volume name> 5) Also if possible vdsm logs
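[Editor's note: the diagnostics raghu lists can typically be gathered with commands along these lines — a hedged sketch only; log paths assume a default RPM install, and "myvol" is a placeholder for the real volume name.]

```shell
# 4) volume configuration and current brick/port state
gluster volume info myvol   > /tmp/volume-info.txt
gluster volume status myvol > /tmp/volume-status.txt

# 1) brick logs, 3) glusterd log, 2) client (mount) logs, 5) vdsm logs
tar czf /tmp/gluster-diag.tar.gz \
    /var/log/glusterfs/bricks \
    /var/log/glusterfs/glusterd.log \
    /var/log/glusterfs/*.log \
    /var/log/vdsm
```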
06:02 karnan joined #gluster
06:02 punit_ raghu,still i am getting connection timeout...even i restarted the network services
06:05 raghu punit_: The last client logs that you gave yesterday  (the last one which you gave after trying increasing ping-timeout) did not say anything about disconnection. So I want to take a look at all the logs (bricks, glusterd, client, vdsm, volume info and volume status) and see what is going wrong
06:08 jiffin joined #gluster
06:10 vimal joined #gluster
06:11 nangthang joined #gluster
06:16 cristov joined #gluster
06:16 punit_ raghu, here is glusterinfo http://paste.ubuntu.com/10625542/
06:17 punit_ raghu,here is gluster status http://paste.ubuntu.com/10625545/
06:18 punit_ raghu, all the required logs are here http://www.filedropper.com/logs
06:18 punit_ raghu,please let me know if you are not able to download..
06:21 raghu punit_: sure. I am about to download now
06:21 kdhananjay joined #gluster
06:28 atalur joined #gluster
06:28 kshlm joined #gluster
06:39 kdhananjay joined #gluster
06:52 Rapture joined #gluster
06:56 glusterbot News from newglusterbugs: [Bug 1203557] gluster rpm build failing for snapshot scheduler install <https://bugzilla.redhat.com/show_bug.cgi?id=1203557>
07:05 Guest48683 joined #gluster
07:06 Guest48683 joined #gluster
07:08 meghanam joined #gluster
07:08 meghanam joined #gluster
07:15 lifeofguenter joined #gluster
07:16 ppai joined #gluster
07:16 atalur joined #gluster
07:19 rjoseph joined #gluster
07:22 hchiramm joined #gluster
07:27 punit_ raghu, are you able to download the log file ??
07:27 jtux joined #gluster
07:28 PaulCuzner joined #gluster
07:39 T3 joined #gluster
07:46 rjoseph joined #gluster
07:46 bala joined #gluster
07:47 Pupeno joined #gluster
07:50 DV joined #gluster
07:58 _polto_ joined #gluster
08:01 mbukatov joined #gluster
08:03 meghanam joined #gluster
08:04 meghanam joined #gluster
08:07 jcastillo joined #gluster
08:08 jflf joined #gluster
08:08 [Enrico] joined #gluster
08:09 T3 joined #gluster
08:16 o5k joined #gluster
08:20 atalur joined #gluster
08:20 oxae joined #gluster
08:23 hgowtham joined #gluster
08:24 DV joined #gluster
08:27 glusterbot News from newglusterbugs: [Bug 1203581] Disperse volume: No output with gluster volume heal info <https://bugzilla.redhat.com/show_bug.cgi?id=1203581>
08:27 anil_ joined #gluster
08:29 deepakcs joined #gluster
08:30 raghu joined #gluster
08:31 raghu punit_: can you set the ping timeout to 0 and check? If it does not work, then set it back to 100 itself (gluster volume set <volume name> network.ping-timeout 0)
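[Editor's note: spelled out, the two settings raghu suggests look like this; "myvol" is a placeholder volume name.]

```shell
gluster volume set myvol network.ping-timeout 0    # disable the timeout for the test
gluster volume set myvol network.ping-timeout 100  # set it back to 100 if that doesn't help
```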
08:32 SOLDIERz______ joined #gluster
08:32 punit_ raghu, ok i will set it and try again
08:35 liquidat joined #gluster
08:35 fsimonce joined #gluster
08:42 SOLDIERz________ joined #gluster
08:43 atalur joined #gluster
08:55 punit_ raghu, still the same..
08:56 punit_ raghu,tried both the same result...
08:57 glusterbot News from newglusterbugs: [Bug 1203589] Disperse volume: marker-quota.c:1284:mq_get_parent_inode_local warning messages in brick logs <https://bugzilla.redhat.com/show_bug.cgi?id=1203589>
08:58 ashiq joined #gluster
08:59 raghu punit_: ok.
08:59 tanuck joined #gluster
08:59 atalur joined #gluster
09:02 deniszh joined #gluster
09:04 deniszh1 joined #gluster
09:06 Apeksha joined #gluster
09:06 punit_ raghu, is there any more possiblity ??
09:17 _polto_ joined #gluster
09:21 SOLDIERz________ joined #gluster
09:21 ira joined #gluster
09:22 nbalacha joined #gluster
09:25 Slashman joined #gluster
09:29 Dw_Sn joined #gluster
09:29 punit_ raghu,did you find any thing else in the logs file which can cause this problem ??
09:30 raghu punit_: looking at the logs, it looks like clients are disconnecting with bricks. Can you check in bricks if there are any firewall rules which are blocking client connections?
09:32 punit_ raghu,selinux disabled and iptables has the open ports for gluster  you can fidn the iptables rules here...http://paste.ubuntu.com/10625888/
09:43 pkoro joined #gluster
09:45 Manikandan joined #gluster
09:56 deniszh joined #gluster
09:57 glusterbot News from newglusterbugs: [Bug 1177773] [RFE][HC] - Brick scaling and balancing when adding storage on hyper converged nodes. <https://bugzilla.redhat.com/show_bug.cgi?id=1177773>
09:57 glusterbot News from newglusterbugs: [Bug 1177775] [RFE][HC] - Brick-rebalance and brick-replace when losing host on hyper converged nodes. <https://bugzilla.redhat.com/show_bug.cgi?id=1177775>
09:57 glusterbot News from newglusterbugs: [Bug 1177791] [RFE][HC] - add parameters and default policies for data centers running hyper converged nodes to utilize images locality. <https://bugzilla.redhat.com/show_bug.cgi?id=1177791>
09:57 glusterbot News from newglusterbugs: [Bug 1203629] DHT:Quota:- brick process crashed after deleting .glusterfs from backend <https://bugzilla.redhat.com/show_bug.cgi?id=1203629>
09:59 jbrooks joined #gluster
10:00 nshaikh joined #gluster
10:02 gildub joined #gluster
10:04 punit_ raghu, but i can telnet all the storage nodes on the port 24007
10:09 nbalacha joined #gluster
10:10 dusmant joined #gluster
10:12 frakt logrotate isn't rotating my glusterfs logs, is there a known problem or fix to this? Using glusterfs 3.4.6 on Debian
10:13 R0ok_ joined #gluster
10:14 PaulCuzner joined #gluster
10:19 W_v_D_ joined #gluster
10:19 kovshenin joined #gluster
10:21 ndevos frakt: that could be related to bug 1158923, but I do not know which logrotate files are installed by the Debian package
10:21 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1158923 high, medium, 3.4.6, bugs, MODIFIED , glusterfs logrotate config file pollutes global config
10:22 deniszh joined #gluster
10:24 nbalacha joined #gluster
10:24 W_v_D_ Hi all
10:25 W_v_D_ I'm having a problem connecting to the 3.5 ubuntu PPA
10:25 W_v_D_ Is it just me? Or is anyone else having this problem also?
10:27 glusterbot News from newglusterbugs: [Bug 1203637] Disperse volume: glfsheal crashed <https://bugzilla.redhat.com/show_bug.cgi?id=1203637>
10:28 Manikandan joined #gluster
10:28 raghu joined #gluster
10:29 hgowtham joined #gluster
10:31 ndevos ~ppa | W_v_D_
10:31 glusterbot W_v_D_: The official glusterfs packages for Ubuntu are available here: 3.4: http://goo.gl/M9CXF8 3.5: http://goo.gl/6HBwKh 3.6: http://goo.gl/XyYImN -- See more PPAs for QEMU with GlusterFS support, and GlusterFS QA releases at https://launchpad.net/~gluster -- contact semiosis with feedback
10:31 badone joined #gluster
10:33 smohan joined #gluster
10:34 raghu punit_: was there any firewall setting in bricks which caused those disconnections?
10:34 punit_ raghu, there is no firewall...even i tried with disable the iptables but stil same
10:36 harish_ joined #gluster
10:36 punit_ raghu, but how come it's working perfectly before host reboot...that means there is some another problem...because if the firewall rules not affecting before then how come affect now...
10:36 punit_ raghu,even i can telnet all the host on the port number 24007
10:37 raghu punit_: hmm. Its not just 24007 port. 24007 port is only for glusterd. The bricks would be running in different ports.
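[Editor's note: raghu's point — glusterd listens on 24007, but each brick gets its own port, which `gluster volume status` reports — can be sketched like this. The status lines below are made up for illustration; real brick ports vary by release (49152 and up on recent versions).]

```shell
# Made-up excerpt of `gluster volume status` output; column 3 is the port.
status_output='Brick host1:/data/brick1    49152   Y       1234
Brick host2:/data/brick1    49153   Y       5678'

# Pull the port column for every brick line, then probe each one
# (e.g. with nc or telnet) instead of only testing 24007.
ports=$(printf '%s\n' "$status_output" | awk '$1 == "Brick" {print $3}')
for p in $ports; do
    echo "check connectivity with: nc -z <brick-host> $p"
done
```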
10:39 punit_ raghu, but even after stop the iptables it's not working
10:39 W_v_D_ ok, is was using ppa:semiosis/ubuntu-glusterfs-3.5
10:39 W_v_D_ which worked earlier this week
10:39 T3 joined #gluster
10:40 ndevos W_v_D_: yes, I think semiosis deleted it because there was the occasional confusion about which ppa to use
10:40 W_v_D_ ok, I forget where I got the instructions.
10:40 W_v_D_ thanks for the right link
10:42 Manikandan joined #gluster
10:42 hgowtham joined #gluster
10:43 raghu punit_: Did you reboot just the machine where the bricks were there or the machine where the client is running as well?
10:44 punit_ raghu,yes...and after reboot...i am unable to poweron the VM's
10:45 raghu punit_: can you list contents on the glusterfs mount point?
10:45 Pupeno joined #gluster
10:45 punit_ raghu, on the same host machines bricks and VM running
10:46 raghu punit_: ohh, it means the glusterfs client is running on one of the machines from the trusted storage pool itself, right?
10:47 punit_ raghu, yes
10:47 raghu punit_: so are u able to list the contents of the volume directly from the glusterfs client? (i.e. can you cd into the glusterfs mount point and check if you can see the contents of the volume)?
10:48 punit_ raghu, Gluster mount point details http://paste.ubuntu.com/10626687/
10:48 punit_ raghu, yes i can see all the VM disks inside the mounted partation
10:50 punit_ raghu,inside the images folder http://paste.ubuntu.com/10626703/
10:51 raghu punit_: So, it means you are able to access the contents from the mount point properly, but while accessing them (the vms) via the hypervisor its timing out, right?
10:51 punit_ raghu, Yes right...
10:52 punit_ raghu, ihave the following network topologies
10:55 punit_ raghu, ihave 6 nic on all my host servers, eth0 :- ovirtmgmt,eth1: VM private(Vm Network) network,eth2 and eth3 with bond1 and serve VM public(VM Network) network...eth4 and eth5 10G network with bond0 and serve storage(Gluster) network
10:55 punit_ ,engine server has only one nic which is connected with ovirtmgmt network switch..
11:05 pcaruana joined #gluster
11:05 Manikandan joined #gluster
11:05 hgowtham joined #gluster
11:10 firemanxbr joined #gluster
11:12 rjoseph joined #gluster
11:22 dusmant joined #gluster
11:23 raghu punit_: is restarting the volume an option for you? if so, can you please set ping-timeout to 0 and restart the volume?
11:25 meghanam joined #gluster
11:27 meghanam joined #gluster
11:31 16WAAH75C joined #gluster
11:31 64MACDD5N joined #gluster
11:34 tomased joined #gluster
11:41 SOLDIERz________ joined #gluster
11:43 Pupeno joined #gluster
11:44 rjoseph joined #gluster
11:46 dusmant joined #gluster
11:47 deniszh joined #gluster
11:50 rolfb joined #gluster
11:52 Pupeno joined #gluster
11:52 Pupeno joined #gluster
11:53 frakt ndevos: Thanks but I don't think that is the same problem as I have. I have a problem where the file doesn't rotate at all and is now 611 MB.
11:54 frakt together with a problem that causes lots of log entries, this becomes a problem
11:55 atalur joined #gluster
11:55 frakt my logrotate.d/glusterfs-common is configured to do daily, rotate 7
11:55 frakt but the file hasn't rotated since feb 24th which was probably a restart
11:57 frakt not sure if this is expected but it'
11:57 o5k_ joined #gluster
11:57 frakt it's the volume.log.1 file that is growing and there is a new file volume.log that has 0 bytes
12:01 dusmant joined #gluster
12:01 anoopcs joined #gluster
12:02 frakt uhm maybe the problem is that the logrotate.d/glusterfs-common doesn't send the hup. "postrotate
12:02 frakt [ ! -f /var/run/glusterd.pid ] || kill -HUP `cat /var/run/glusterd.pid`"
12:02 frakt that's not right, is it?
12:10 T3 joined #gluster
12:11 diegows_ joined #gluster
12:12 Apeksha joined #gluster
12:15 rjoseph joined #gluster
12:16 itisravi joined #gluster
12:25 Pupeno joined #gluster
12:32 st_ joined #gluster
12:34 theron joined #gluster
12:34 theron joined #gluster
12:34 rjoseph joined #gluster
12:35 st_ Hi all. Does anyone have experience in live upgrade glusterfs cluster from 3.5.6 to 3.6? Is this even posible to do that? I saw in maunal that it is not recomended, try to do this on our staging evironment but it seems to some issues with it.
12:38 diegows joined #gluster
12:39 jmarley joined #gluster
12:41 Norky joined #gluster
12:41 bene2 joined #gluster
12:42 SOLDIERz________ joined #gluster
12:48 frakt ndevos: found another bug report which explains my problem and a workaround: https://bugzilla.redhat.com/show_bug.cgi?id=949706
12:48 glusterbot Bug 949706: unspecified, medium, ---, bugs, NEW , Log-rotate needs to hup
12:49 LebedevRI joined #gluster
12:52 Norky joined #gluster
12:53 wkf joined #gluster
12:56 ndevos frakt: oh, good find!
12:57 ndevos lalatenduM: you're our logrotate expert, any input/ideas/status on bug 949706 ?
12:57 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=949706 unspecified, medium, ---, bugs, NEW , Log-rotate needs to hup
12:57 frakt I added the killall -HUP glusterfs line to the logrotate and I also had to manually run the killall -HUP glusterfs to get the process to start using the new log file
12:57 frakt and then logrotate works
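[Editor's note: the workaround frakt describes amounts to a logrotate stanza along these lines — a hedged sketch; the glob patterns and paths are illustrative and depend on the distro package.]

```shell
/var/log/glusterfs/*.log /var/log/glusterfs/bricks/*.log {
    daily
    rotate 7
    missingok
    compress
    sharedscripts
    postrotate
        # HUP glusterd via its pidfile, plus every glusterfs client/brick
        # process, so they reopen their logs instead of growing *.log.1
        [ ! -f /var/run/glusterd.pid ] || kill -HUP `cat /var/run/glusterd.pid`
        killall -HUP glusterfs glusterfsd 2>/dev/null || true
    endscript
}
```

As frakt notes, the already-running processes need one manual HUP before they start writing to the freshly rotated file.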
12:58 glusterbot News from newglusterbugs: [Bug 1058300] VMs do not resume after paused state and storage connection to a gluster domain (they will also fail to be manually resumed) <https://bugzilla.redhat.com/show_bug.cgi?id=1058300>
12:59 lalatenduM ndevos, checking
13:02 nangthang joined #gluster
13:05 alexkit322 joined #gluster
13:05 lalatenduM ndevos, frakt I think it is fixed in master check https://github.com/gluster/glusterfs/blob/master/extras/glusterfs-logrotate
13:06 lalatenduM I have put the same comment in the bug
13:06 alexkit322 Hi, everybody. Does the cluster.stripe-block-size options is differ from version to version of GlusterFS? Or it's 128 KB since it appeared?
13:08 p0licy joined #gluster
13:08 ndevos lalatenduM: thanks, you can close the bug as a duplicate in that case, just check what bug was used to introduce the HUP (click 'blame' and then the commit)
13:08 lalatenduM ndevos, yup will so
13:09 alexkit322 For stripe type of volumes, all peers should have completely the same version (like 3.6.2, for example, for every node) or I can have two machines with 3.6.2 and two with 3.6.1. Or even one with 3.6.x and another 3.5.x?
13:11 T3 joined #gluster
13:12 ndevos lalatenduM++ thanks!
13:12 glusterbot ndevos: lalatenduM's karma is now 7
13:12 ndevos alexkit322: 3.6.x should be compatible, but check if you really want ,,(stripe) in the first place
13:12 glusterbot alexkit322: Please see http://joejulian.name/blog/should-i-use-stripe-on-glusterfs/ about stripe volumes.
13:17 lalatenduM ndevos, http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=2053 was the bug , and the web page is not available now :)
13:17 glusterbot Bug 2053: medium, medium, ---, dkl, CLOSED WORKSFORME, JVM from Kaffe is not working well (sparc and intel)
13:20 lalatenduM ndevos, nevermind I found another appropriate bug 1126788
13:20 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1126788 unspecified, unspecified, ---, kkeithle, POST , glusterfs.spec: deprecate *.logrotate files in dist-git in favor of the upstream logrotate files:
13:20 dgandhi joined #gluster
13:22 ndevos lalatenduM: I think all the old gluster.com bugs are available as an alias, like (untested) https://bugzilla.redhat.com/show_bug.cgi?id=GLUSTER-2053
13:22 glusterbot Bug GLUSTER: could not be retrieved: InvalidBugId
13:22 ndevos well, glusterbot, you should learn about non-numeric bug id's
13:23 lalatenduM ndevos, cool :) ndevos++
13:23 glusterbot lalatenduM: ndevos's karma is now 11
13:24 frakt do you have any idea about why my gluster client is logging this up towards 300 times per minute btw?
13:24 jmarley joined #gluster
13:24 frakt [2015-03-19 13:21:00.933844] I [dht-common.c:1000:dht_lookup_everywhere_done] 0-storage-dht: STATUS: hashed_subvol storage-replicate-0 cached_subvol null
13:24 * ndevos does not know about that, and would ask shyam (who's not online yet?)
13:27 athinkingmeat joined #gluster
13:28 glusterbot News from resolvedglusterbugs: [Bug 949706] Log-rotate needs to hup <https://bugzilla.redhat.com/show_bug.cgi?id=949706>
13:30 chirino joined #gluster
13:30 hamiller joined #gluster
13:32 georgeh-LT2 joined #gluster
13:33 luis_silva joined #gluster
13:35 nangthang joined #gluster
13:35 o5k joined #gluster
13:35 punit_ raghu, already tried with that option also
13:35 punit_ raghu, but no luck
13:40 deniszh joined #gluster
13:41 athinkingmeat left #gluster
13:44 topshare joined #gluster
13:52 RayTrace_ joined #gluster
13:56 lpabon joined #gluster
14:01 marbu joined #gluster
14:04 dusmant joined #gluster
14:06 Pupeno joined #gluster
14:12 _Bryan_ joined #gluster
14:19 jiku joined #gluster
14:20 SOLDIERz________ joined #gluster
14:22 semiosis Yeah those PPAs were doing more harm than good
14:22 semiosis Had to get rid of them
14:23 semiosis I feel a little bit bad for the people who were using them in automated deployments, one contacted me about it already, but they shouldn't have been doing that in the first place
14:23 semiosis If you depend on a package for your production infrastructure you should host it yourself, either on your own APT repo, or at the very least a PPA under your own account.
14:24 xiu Hi, I have a rebalance that seems to be blocked on a node after 300 files scanned using gluster 3.3.2, I don't have any errors in the logs, is there something to look for in particular about rebalances?
14:24 semiosis xiu: look into upgrading to a current release of glusterfs
14:24 xiu yeah :) can't do right now
14:25 semiosis some day
14:25 xiu hope so
14:27 T3 joined #gluster
14:27 renss78 joined #gluster
14:32 shubhendu joined #gluster
14:34 T3 joined #gluster
14:34 jbrooks joined #gluster
14:35 wushudoin| joined #gluster
14:39 renss78 Hi, I’m doing a research in different kinds of Storage Solutions.
14:39 renss78 I see many terms like: Software Defined Storage, Hyper Converged Storage Solution, Software Only Hyper Converged Storage Solutions with a Reference Architecture that got demands on specific hardware to make sure you get the best results.
14:39 renss78 Our storage solution today is traditional storage solution with the four separate well knows tiers(Network, Computing, Storage, Software) in a Storage Area Network. But the case is we can’t simply just plugin Disks to enlarge our storage, it is a typical scale up solution. We just want to add more storage without the dramatically changes that Hyper Converged Storage brings along, they claim
14:39 renss78 that they add all tiers in one place.
14:39 renss78 How do I need see GlusterFS in the picture above? Does it still works with the separate tiers of Network, Compute and Storage? So if I have for example a Dell Equalogic can I re-use that Storage Array with the Gluster Software Based solution? And in fact just add a simple commodity pc with 1TB along with the compute power it carries to my Storage Area Network?
14:40 renss78 sry for the long question
14:40 plarsen joined #gluster
14:43 lalatenduM renss78, I feel your question is more suitable for a mail to gluster-users mailing list
14:44 renss78 Yes im sry
14:44 renss78 I just added myself to the Mailing list and sended a mail
14:45 renss78 http://www.gluster.org/pipermail/gluster-users/2015-March/021145.html
14:49 timbyr_ joined #gluster
14:53 Gill joined #gluster
14:54 Gill left #gluster
14:54 kdhananjay joined #gluster
14:58 punit_ joined #gluster
15:01 smohan joined #gluster
15:02 mbukatov joined #gluster
15:03 kkeithley_ ,,(ppa)
15:03 glusterbot The official glusterfs packages for Ubuntu are available here: 3.4: http://goo.gl/M9CXF8 3.5: http://goo.gl/6HBwKh 3.6: http://goo.gl/XyYImN -- See more PPAs for QEMU with GlusterFS support, and GlusterFS QA releases at https://launchpad.net/~gluster -- contact semiosis with feedback
15:05 o5k Hi! After having got good knowledge with gluster, I want to make s.th useful with it, I thought about live migration of VMs, and benefiting from gluster's power to make a good solution
15:05 o5k I found two possible solutions: 1)use glusterfs as a storage backend with fuse client, or 2)using libgfapi and use gluster, not as file system but, as bloc storage system
15:05 o5k The aim now is to see which solution is better, knowing that if I choose 1) I will be able to use XEN for virtualization and Live Migration of VMs, if i choose 2) I will be obliged to use KVM for that matter..
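[Editor's note: the two access paths o5k weighs can be sketched like this; hostnames, volume, and image names are made up, and option 2 assumes QEMU was built with gluster support.]

```shell
# 1) FUSE client: any hypervisor (including Xen) sees a normal mounted
#    filesystem and stores VM images as ordinary files on it.
mount -t glusterfs server1:/vmvol /var/lib/vmstore

# 2) libgfapi: QEMU/KVM opens the image straight from the volume,
#    bypassing the FUSE layer (generally lower overhead, KVM-only).
qemu-system-x86_64 -drive file=gluster://server1/vmvol/vm1.img,if=virtio
```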
15:09 sputnik13 joined #gluster
15:15 smohan joined #gluster
15:16 B21956 joined #gluster
15:19 RameshN joined #gluster
15:22 Dw_Sn_ joined #gluster
15:28 glusterbot News from newglusterbugs: [Bug 1203739] Self-heal of sparse image files on 3-way replica "unsparsifies" the image <https://bugzilla.redhat.com/show_bug.cgi?id=1203739>
15:33 rjoseph joined #gluster
15:36 anil_ joined #gluster
15:54 liquidat joined #gluster
15:56 hagarth joined #gluster
16:00 lifeofguenter joined #gluster
16:00 coredump joined #gluster
16:01 chirino joined #gluster
16:06 baoboa joined #gluster
16:06 lalatenduM joined #gluster
16:08 T3 joined #gluster
16:18 dbruhn joined #gluster
16:22 overclk joined #gluster
16:25 T3 joined #gluster
16:28 gnudna joined #gluster
16:33 nangthang joined #gluster
16:35 jermudgeon is anyone actually using disperse in the real world?
16:52 ttkg jermudgeon - I'm not yet, but am planning on testing it soon(ish).  It's a feature I've been waiting for with great anticipation.
16:54 o5k_ joined #gluster
16:54 * ttkg would rather build out stripes with +3 redundancy any day than stripes of three-way mirrors
16:56 deniszh joined #gluster
16:59 bennyturns joined #gluster
17:02 o5k_ joined #gluster
17:04 bala joined #gluster
17:06 jmarley joined #gluster
17:06 sputnik13 joined #gluster
17:09 jermudgeon ttkg: yeah, I’m wanting to have alternatives to hardware raid, and checksumming/parity is needed, hence my interest in disperse
17:12 coredump joined #gluster
17:15 bennyturns joined #gluster
17:18 deniszh joined #gluster
17:20 Rapture joined #gluster
17:24 eberg joined #gluster
17:24 wushudoin| joined #gluster
17:27 eberg I’m creating a gluster cluster in AWS on a AMI instance and am running into problems with some of the package dependencies.  Specifically with librdmacm and libibverbs not being available in the glusterfs-epel repo.  Has anyone run into this and resolved it?  Might just be easier to do this on Ubuntu (i know gluster is owned by RH) instances, since that’s what I’ve got working on our network.
17:32 p0licy eberb: are using epel6 or epell7?
17:32 p0licy has anyone got this error OSError: [Errno 16] Device or resource busy when using geo-replication
17:36 ndevos eberg: you do not have to install the glusterfs-rdma package :)
17:55 eberg @p0licy, I’m using epel6
17:57 eberg left #gluster
17:57 eberg joined #gluster
18:01 lpabon joined #gluster
18:06 gnudna hey guys on centos 7 in permissive more i see the following 0-management: readv on /var/run/7b2cae56f9420e298a8041fc85c83282.socket failed (Invalid argument) in etc-glusterfs-glusterd.vol.log
18:06 gnudna permissive being selinux
18:08 Alpinist joined #gluster
18:13 wushudoin| joined #gluster
18:19 Pupeno joined #gluster
18:22 o5k_ joined #gluster
18:23 gnudna the above gets written to log file every 3 seconds
18:33 eberg @ndevos, glusterd is dying when I try to do a peer probe.  The only apparent errors were related to rmda, but i think it’s actually a security group thing.
18:42 DV joined #gluster
18:42 eberg it was a firewall issue.  the peer probe could not reach the specified peer and died without being clear about that.
18:59 deniszh joined #gluster
19:01 _polto_ joined #gluster
19:01 virusuy joined #gluster
19:03 coredump joined #gluster
19:07 jobewan joined #gluster
19:32 roost joined #gluster
19:35 oxae joined #gluster
19:48 JustinClift hagarth: Any idea who works on the peer probe code?  eberg's 2 lines about 1 hr ago indicate it's giving not-great info on connection failure.  Doesn't sound like a hard fix. ;)
20:04 gildub joined #gluster
20:16 papamoose joined #gluster
20:17 papamoose joined #gluster
20:27 DV joined #gluster
20:55 p0licy does anyone recommend any monitoring tools?
20:57 rotbeard joined #gluster
21:01 elitecoder joined #gluster
21:03 _polto_ joined #gluster
21:04 elitecoder Any bugs that would cause files to disappear?
21:05 luis_silva joined #gluster
21:09 bene2 joined #gluster
21:19 gnudna left #gluster
21:30 elitecoder If GlusterFS is using a block device directly, what FS does it use?
21:30 elitecoder gluster-prod-a:/files on /mnt/gluster type fuse.glusterfs (rw,default_permissions,al​low_other,max_read=131072)
21:32 wkf joined #gluster
21:35 gildub joined #gluster
21:43 badone joined #gluster
21:51 dgandhi joined #gluster
21:52 dgandhi joined #gluster
21:53 dgandhi joined #gluster
22:12 ttkg p0licy - Nagios (and Icinga, which is a Nagios fork) is pretty much -the- industry standard for monitoring distributed systems.
22:13 ttkg p0licy - http://exchange.nagios.org/directory/Plugins/System-Metrics/File-System/GlusterFS-checks/details
22:15 ira joined #gluster
22:23 badone_ joined #gluster
22:43 badone_ joined #gluster
22:47 badone__ joined #gluster
22:49 plarsen joined #gluster
22:55 sputnik13 joined #gluster
22:59 elitecoder JoeJulian: I have a theoretical question. If XFS were to lose a file with two gluster nodes in a replication config, would self-healing result in deletion or duplication?
23:02 LebedevRI joined #gluster
23:02 elitecoder (One node's underlying filesystem, XFS loses a file)
23:06 Gill joined #gluster
23:13 wkf joined #gluster
23:13 elitecoder Also is GlusterGS compatible with EXT4 now?
23:19 Leildin joined #gluster
23:25 bit4man joined #gluster
23:30 glusterbot News from resolvedglusterbugs: [Bug 762410] chattr: Function not implemented <https://bugzilla.redhat.com/show_bug.cgi?id=762410>
23:46 elitecoder bbl
23:53 bala joined #gluster
23:56 Leildin joined #gluster
