
IRC log for #gluster, 2014-12-22


All times shown according to UTC.

Time Nick Message
00:11 eryc joined #gluster
00:14 Pupeno joined #gluster
00:38 diegows joined #gluster
01:01 XpineX joined #gluster
01:04 an joined #gluster
01:12 plarsen joined #gluster
01:15 fandi joined #gluster
01:26 Pupeno joined #gluster
01:31 wgao joined #gluster
01:41 T3 joined #gluster
02:01 bala joined #gluster
02:03 harish joined #gluster
02:14 diegows joined #gluster
02:20 an joined #gluster
02:26 bala joined #gluster
02:38 haomaiwang joined #gluster
03:03 jaank joined #gluster
03:38 Arminder joined #gluster
03:47 nbalacha joined #gluster
03:51 nbalacha joined #gluster
03:52 DougBishop joined #gluster
03:54 nrcpts joined #gluster
03:57 RameshN joined #gluster
04:03 itisravi joined #gluster
04:04 nrcpts joined #gluster
04:09 atinmu joined #gluster
04:12 shubhendu joined #gluster
04:23 ndarshan joined #gluster
04:26 ppai joined #gluster
04:32 lpabon joined #gluster
04:41 anoopcs joined #gluster
04:45 gothos1 joined #gluster
04:56 soumya joined #gluster
04:59 rafi1 joined #gluster
05:01 kshlm joined #gluster
05:03 M28_ joined #gluster
05:04 _shaps__ joined #gluster
05:05 ccha2 joined #gluster
05:05 ccha2 joined #gluster
05:05 tg2 joined #gluster
05:06 spandit joined #gluster
05:10 sickness_ joined #gluster
05:10 Debloper joined #gluster
05:10 ndevos_ joined #gluster
05:10 ndevos_ joined #gluster
05:12 AaronGr joined #gluster
05:12 sahina joined #gluster
05:13 yosafbridge joined #gluster
05:16 Gorian joined #gluster
05:24 atalur joined #gluster
05:30 pdrakeweb joined #gluster
05:31 lalatenduM joined #gluster
05:38 smohan joined #gluster
05:41 sac joined #gluster
05:44 hchiramm joined #gluster
05:46 anil joined #gluster
05:47 meghanam joined #gluster
05:48 Gorian joined #gluster
05:50 ramteid joined #gluster
05:51 bala joined #gluster
05:56 overclk joined #gluster
05:56 kdhananjay joined #gluster
05:59 an joined #gluster
06:01 Gorian joined #gluster
06:10 an joined #gluster
06:11 glusterbot News from newglusterbugs: [Bug 1175733] [USS]: If the snap name is same as snap-directory than cd to virtual snap directory fails <https://bugzilla.redhat.com/show_bug.cgi?id=1175733>
06:17 kanagaraj joined #gluster
06:18 soumya joined #gluster
06:20 nrcpts joined #gluster
06:20 soumya_ joined #gluster
06:24 jaank joined #gluster
06:30 poornimag joined #gluster
06:37 Paul-C left #gluster
06:37 ndarshan joined #gluster
06:39 anoopcs joined #gluster
06:51 jiffin joined #gluster
06:52 bala joined #gluster
07:00 ppai joined #gluster
07:10 LebedevRI joined #gluster
07:13 ndarshan joined #gluster
07:16 SOLDIERz joined #gluster
07:21 hagarth joined #gluster
07:23 jtux joined #gluster
07:27 lalatenduM_ joined #gluster
07:29 rgustafs joined #gluster
07:45 an joined #gluster
07:52 saurabh joined #gluster
08:01 cultavix joined #gluster
08:03 ndarshan joined #gluster
08:15 fubada purpleidea: culprit found https://groups.google.com/forum/#!topic/foreman-announce/kZDMSlrannk
08:24 hchiramm_ joined #gluster
08:27 fsimonce joined #gluster
08:27 hybrid512 joined #gluster
08:47 hybrid512 joined #gluster
08:47 purpleidea fubada: aha!
08:47 purpleidea fubada: i'll smack some people for you!
08:54 Fen2 joined #gluster
08:55 an joined #gluster
08:55 Fen2 Hi :) We are going to set up a cluster of 6 servers with GlusterFS, which release do you recommend? :)
08:55 purpleidea fubada: https://github.com/theforeman/foreman/commit/d3b7f426959b7195ceedac1b9e719bfd1563af02
08:55 purpleidea fubada: all fixed in 1.7.1 :)
09:00 Philambdo joined #gluster
09:15 fandi joined #gluster
09:18 sahina joined #gluster
09:23 Fen2 Hi :) We are going to set up a cluster of 6 servers with GlusterFS, which release do you recommend?
09:24 rgustafs joined #gluster
09:43 deniszh joined #gluster
09:44 sahina joined #gluster
09:44 rgustafs joined #gluster
09:46 lalatenduM_ joined #gluster
09:47 johndescs_ joined #gluster
09:50 soumya joined #gluster
09:52 nbalacha joined #gluster
09:55 harish joined #gluster
10:05 Norky joined #gluster
10:08 Fen2 Hi :) We are going to set up a cluster of 6 servers with GlusterFS, which release do you recommend?
10:10 kdhananjay1 joined #gluster
10:12 glusterbot News from newglusterbugs: [Bug 1176543] RDMA: GFAPI benchmark segfaults when ran with greater than 2 threads, no segfaults are seen over TCP <https://bugzilla.redhat.com/show_bug.cgi?id=1176543>
10:13 kovshenin joined #gluster
10:18 warci joined #gluster
10:27 kdhananjay joined #gluster
10:28 drankis joined #gluster
10:32 warci hello all, we've upgraded from 3.4 -> 3.5, but now windows clients can't connect through nfs. The reason is: the server now claims it can authenticate via kerberos, even though kerberos isn't configured
10:32 warci is there some way to disable this authentication through an option or something?
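A minimal sketch, assuming a volume called "myvol", of how the NFS-related volume options can be inspected and changed from the Gluster CLI while chasing this; note that no specific option for suppressing the Kerberos advertisement is confirmed anywhere in this log:

    # list the NFS-related options this release understands
    gluster volume set help | grep -A 2 'nfs\.'
    # show what is currently configured on the volume
    gluster volume info myvol
    # generic syntax for changing an option (option name and value are illustrative only)
    gluster volume set myvol nfs.disable off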
10:34 kdhananjay1 joined #gluster
10:35 ghenry joined #gluster
10:42 Pupeno joined #gluster
10:58 M28_ joined #gluster
10:59 aravindavk joined #gluster
10:59 ppai joined #gluster
10:59 msciciel joined #gluster
11:02 Champi joined #gluster
11:04 [o__o] joined #gluster
11:07 vincent_vdk joined #gluster
11:07 diegows joined #gluster
11:26 hchiramm joined #gluster
11:28 social joined #gluster
11:29 T3 joined #gluster
11:37 hchiramm joined #gluster
11:41 haomaiwa_ joined #gluster
11:51 ppai joined #gluster
12:06 edward1 joined #gluster
12:17 Fen1 joined #gluster
12:18 RameshN joined #gluster
12:34 hagarth joined #gluster
12:42 glusterbot News from newglusterbugs: [Bug 1122807] [enhancement]: Log a checksum of the new client volfile after a graph change. <https://bugzilla.redhat.com/show_bug.cgi?id=1122807>
12:42 glusterbot News from newglusterbugs: [Bug 1175730] [USS]: creating file/directories under .snaps shows wrong error message <https://bugzilla.redhat.com/show_bug.cgi?id=1175730>
12:48 Norky joined #gluster
12:51 lalatenduM_ joined #gluster
12:51 ira joined #gluster
12:52 hchiramm joined #gluster
12:58 Gorian joined #gluster
13:02 _Bryan_ joined #gluster
13:15 jonybravo30_ joined #gluster
13:16 calisto joined #gluster
13:21 Gorian joined #gluster
13:24 B21956 joined #gluster
13:52 anoopcs joined #gluster
13:52 shubhendu joined #gluster
13:53 kshlm joined #gluster
13:58 virusuy joined #gluster
13:59 chirino joined #gluster
14:00 aravindavk joined #gluster
14:02 virusuy joined #gluster
14:05 saltsa joined #gluster
14:15 Fen1 Hi :) We are going to set up a cluster of 6 servers with GlusterFS, which release do you recommend? 3.4/3.5/3.6?
14:16 fandi_ joined #gluster
14:18 l0uis Fen1: There was a thread on gluster-users recently: http://www.gluster.org/pipermail/gluster-users/2014-November/019530.html
14:18 Fen1 l0uis : thx i will read this :)
14:19 l0uis Fen1: summary: JoeJulian recommends 3.5.x.
14:22 Fen1 l0uis : and regarding performance, does 3.6 have some improvements or not?
14:22 Fen1 because we are setting up (for the moment) a laboratory, not production
14:24 Fen1 we have 6 servers (i7/16GB) which are waiting :) i have chosen CentOS/XFS also, is it good? or is there something better? :)
14:25 SOLDIERz joined #gluster
14:28 l0uis Fen1: I'm sure 3.6 has improvements but I am not familiar. If you have a lab, I suspect everyone would love for you to test 3.6 :)
14:29 Fen1 ok, so i will ;)
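For a lab of that shape, the commands generally look like the sketch below; the hostnames, device name, replica count and mount points are placeholders rather than recommendations from the channel:

    # on each of the 6 CentOS servers: format and mount an XFS brick
    # (512-byte inodes were the commonly suggested setting for gluster bricks at the time)
    mkfs.xfs -i size=512 /dev/sdb1
    mkdir -p /bricks/brick1 && mount /dev/sdb1 /bricks/brick1

    # from server1: build the trusted pool
    for h in server2 server3 server4 server5 server6; do gluster peer probe $h; done

    # create and start a 3x2 distributed-replicated volume across the 6 bricks
    gluster volume create labvol replica 2 server{1..6}:/bricks/brick1/data
    gluster volume start labvol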
14:29 jaank joined #gluster
14:35 jaank_ joined #gluster
14:35 coredump joined #gluster
14:53 Pupeno joined #gluster
14:59 coredump joined #gluster
15:05 RameshN joined #gluster
15:05 fubada purpleidea: hi
15:06 fubada did the operating-version fix ever make it into the puppet module master?
15:06 fubada I still see that restart every run behavior
15:16 Pupeno_ joined #gluster
15:21 soumya joined #gluster
15:37 elico joined #gluster
15:42 dastar hi all,
15:43 tdasilva joined #gluster
15:46 plarsen joined #gluster
15:52 an joined #gluster
15:53 Sunghost joined #gluster
15:54 lmickh joined #gluster
15:56 Sunghost hello - i have a 2-node distributed volume on which one node died. is there a way to find out which files in which directories are missing and which folders are complete?
15:58 mator Sunghost, ls /exports/brick*
15:59 mator add "recursive" or "find /exports/brick* -type f"
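A slightly more filtered variant of the same idea, assuming the bricks live under /exports; the -prune keeps gluster's internal .glusterfs tree out of the listing:

    find /exports/brick* -name .glusterfs -prune -o -type f -print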
15:59 julim joined #gluster
15:59 Sunghost ok yes, it shows me the content of the current brick and its files - right?
16:00 Sunghost but i can't access node1, which died
16:00 Sunghost i thought there was some logic or a database on each node that could be extracted to find the missing files in folders - maybe
16:03 DV joined #gluster
16:08 sac joined #gluster
16:14 jonybravo30_ hello!
16:14 glusterbot jonybravo30_: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
16:15 jonybravo30_ how can i replace one replica with another one that has a different IP?
16:15 jonybravo30_ the two machines are identical
16:17 l0uis Sunghost: There is no central meta data nor central list of all files. Presence on a brick is the only way to know. So if the node is dead, and you can't see the brick, and you don't have an external master list, you can't figure it out (as far as I know).
16:20 Sunghost ok that's what i assumed too - damn - any idea how to create such a list in the future? perhaps a simple ls -ahlR > list.txt ?
16:22 l0uis that works
16:24 an joined #gluster
16:25 plarsen joined #gluster
16:28 cultav1x joined #gluster
16:30 Sunghost Mh ok, is there a better way than running these slow commands each time? perhaps some delta function?
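One possible way to keep such a list and only look at the changes, sketched with assumed paths and an assumed nightly schedule:

    # nightly (e.g. from cron): dump a sorted file list per brick, skipping .glusterfs
    find /exports/brick1 -name .glusterfs -prune -o -type f -print | sort \
        > /var/backups/brick1-files-$(date +%F).txt

    # delta between two runs: files added since the previous list, then files removed
    comm -13 /var/backups/brick1-files-2014-12-21.txt /var/backups/brick1-files-2014-12-22.txt
    comm -23 /var/backups/brick1-files-2014-12-21.txt /var/backups/brick1-files-2014-12-22.txt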
16:34 purpleidea fubada: git master, yes... can you look through your server logs and look for any warnings?
16:34 mator Sunghost, i think you'd better implement a replicated volume instead of distributed
16:36 Sunghost mator: the price for disks is far higher than for RAID6 on each node, and this time it was my fault - it's stupid but i will survive it
16:38 fubada purpleidea: not sure if you saw but the @interfaces issue was caused by a bug in foreman 1.7.0 and fixed in 1.7.1
16:38 purpleidea fubada: i did
16:39 purpleidea fubada: i sent you a commit :)
16:39 fubada thanks for helping me out with that
16:39 purpleidea yw
16:39 jaank joined #gluster
16:40 mator Sunghost, http://people.redhat.com/ndevos/talks/Gluster-data-distribution_20120218.pdf , the last pages of it talks about algorithms used (davies-meyer, elastic hash) and DHT xlator
16:43 sputnik13 joined #gluster
16:46 Sunghost thx - will read that - best x-mas
16:48 tetreis joined #gluster
16:48 an joined #gluster
16:50 vimal joined #gluster
16:50 sputnik13 joined #gluster
17:02 David_H_Smith joined #gluster
17:08 fubada purpleidea: is there some weirdness around adding new volumes to existing bricks with the module?
17:08 fubada i cant get it to create a new volume
17:08 fubada it just ignores the manifest
17:09 fubada i see it makes a /var/lib/puppet/tmp/gluster/volume/create-puppetfileserver.sh
17:09 fubada but it doesn't actually get executed
17:10 calisto joined #gluster
17:10 purpleidea fubada: one thing at a time. what about the service refresh first?
17:12 fubada purpleidea: one minute let me circle back to that. brb
17:14 sauce joined #gluster
17:14 sauce joined #gluster
17:16 sauce joined #gluster
17:16 sauce joined #gluster
17:25 an joined #gluster
17:26 M28 joined #gluster
17:34 fubada purpleidea: https://gist.github.com/aamerik/02bd5d44716c83d18581
17:34 fubada here is the agent run output
17:35 fubada this happens every agent run
17:35 fubada which looks similar to the versions issue
17:41 purpleidea fubada:
17:41 purpleidea 11:39 < purpleidea> fubada: git master, yes... can you look through your server  logs and look for any warnings?
17:43 semiosis ,,(ubuntu)
17:43 semiosis glusterbot: ping
17:43 glusterbot semiosis is gearing up to improve the ubuntu (and debian) packages. if you have an interest in glusterfs packages for ubuntu please ping semiosis. if you have an issue or bug to report regarding the ubuntu packages (even if you've already told semiosis about it) please open an issue on github: https://github.com/semiosis/glusterfs-debian
17:43 glusterbot pong
17:44 semiosis furthermore, packages of 3.4.6, 3.5.3, and 3.6.1 for precise & utopic were published to the ,,(ppa) over the weekend
17:44 glusterbot The official glusterfs packages for Ubuntu are available here: 3.4: http://goo.gl/M9CXF8 3.5: http://goo.gl/6HBwKh 3.6: http://goo.gl/XyYImN -- See more PPAs for QEMU with GlusterFS support, and GlusterFS QA releases at https://launchpad.net/~gluster -- contact semiosis with feedback
17:44 semiosis trusty packages were already there
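For anyone following along, installing from the PPA on Ubuntu generally looks like the lines below; the exact PPA name (gluster/glusterfs-3.5) is an assumption here, so check the links glusterbot posted above before using it:

    sudo add-apt-repository ppa:gluster/glusterfs-3.5   # PPA name assumed, verify first
    sudo apt-get update
    sudo apt-get install glusterfs-server glusterfs-client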
17:46 * l0uis pings semiosis
17:47 l0uis no issues to report, but we use the ubuntu packages in production. :)
17:47 semiosis great
17:47 semiosis what distro release?
17:47 l0uis precise
17:47 l0uis the only bit of feedback i have right now is the download link. i need to switch from your ppa to the gluster ppa
17:48 semiosis yes, thanks for that, we fixed it over the weekend
17:48 l0uis cool
17:48 lalatenduM joined #gluster
17:50 an joined #gluster
17:53 hagarth semiosis: maybe we could drop a note on the MLs too about the new ppa?
17:54 semiosis hagarth: yes, good idea.  i'll send an email today to -users & -devel
17:54 wgao joined #gluster
17:54 hagarth cool, semiosis++
17:54 glusterbot hagarth: semiosis's karma is now 2000008
18:09 an joined #gluster
18:13 jonybravo30_ joined #gluster
18:17 saltsa joined #gluster
18:30 LebedevRI joined #gluster
18:47 calisto joined #gluster
18:53 sputnik13 joined #gluster
18:59 msmith joined #gluster
18:59 sputnik13 joined #gluster
18:59 jaank joined #gluster
19:05 sputnik13 joined #gluster
19:20 danku joined #gluster
19:22 Philambdo joined #gluster
19:22 doekia joined #gluster
19:26 danku Hello, I think I just screwed something up. I have a two-server replicated setup, and one of the servers had hardware failures. I got the server back up with new hardware and the same OS HDD, but lost the /brick data. Once I got it online and ran heal all, it moved all the data on the good DAS to the .glusterfs dir
19:26 danku My question is: is there a method to recover the data from .glusterfs?
19:29 Philambdo joined #gluster
19:34 JoeJulian @lucky what is this .glusterfs directory
19:34 glusterbot JoeJulian: http://joejulian.name/blog/what-is-this-new-glusterfs-directory-in-33/
19:37 sputnik13 joined #gluster
19:40 danku @JoeJulian Thanks
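For anyone in a similar spot: per JoeJulian's article above, the entries under .glusterfs are normally hardlinks to the named files on the brick, so a rough, hedged heuristic for finding data that now only exists under its GFID name is to look for regular files there with a link count of 1 (the brick path is a placeholder, and the original file names cannot be recovered this way):

    # files under .glusterfs that no longer have a named hardlink on the brick
    find /brick/.glusterfs -type f -links 1 -size +0 -print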
19:54 partner_ my "no space left on device" issue turned out to be an inode issue even though the OS reported plenty of free space; remounting with inode64 fixed it
19:56 shubhendu joined #gluster
19:57 JoeJulian interesting. I'll remember that.
19:58 partner_ pretty much identical case to this from almost year back: http://comments.gmane.org/gmane.comp.file-systems.gluster.user/14356
20:00 partner_ here's the explanation as the thread is looong: http://permalink.gmane.org/gmane.comp.file-systems.gluster.user/14686
20:05 partner_ roughly a 5 TB replica 2 volume with some 20M inodes in use.. i'm a bit scared about when the big volume will explode, as this gives no warning ahead of time
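A quick sketch of how to spot and work around the condition partner_ describes, assuming an XFS brick mounted at /bricks/brick1:

    # "no space left on device" with free blocks remaining usually means inode trouble
    df -i /bricks/brick1
    # per partner_'s report, remounting the XFS brick with inode64 cleared it
    mount -o remount,inode64 /bricks/brick1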
20:06 jaank joined #gluster
20:16 T3 joined #gluster
20:20 M28_ joined #gluster
20:24 semiosis partner_: fyi...
20:24 semiosis ,,(ubuntu)
20:24 glusterbot semiosis is gearing up to improve the ubuntu (and debian) packages. if you have an interest in glusterfs packages for ubuntu please ping semiosis. if you have an issue or bug to report regarding the ubuntu packages (even if you've already told semiosis about it) please open an issue on github: https://github.com/semiosis/glusterfs-debian
20:25 semiosis ,,(ppa) -- new packages of 3.4.6, 3.5.3, and 3.6.1 for precise & utopic
20:25 glusterbot The official glusterfs packages for Ubuntu are available here: 3.4: http://goo.gl/M9CXF8 3.5: http://goo.gl/6HBwKh 3.6: http://goo.gl/XyYImN -- See more PPAs for QEMU with GlusterFS support, and GlusterFS QA releases at https://launchpad.net/~gluster -- contact semiosis with feedback
20:25 semiosis glusterbot: what?  you're not going to decrement  's karma?
20:35 sputnik13 joined #gluster
20:45 calisto joined #gluster
20:49 partner_ semiosis: grrreat!
20:49 partner_ let me know if i can be of any assistance
21:04 cyberbootje joined #gluster
21:15 partner_ no build for wheezy for 3.4.6, empty dir on download
21:25 an joined #gluster
21:27 MattJ_EC joined #gluster
21:30 semiosis ah, ok
21:30 semiosis and thanks for commenting on the logrotate issue (https://github.com/semiosis/glusterfs-debian/issues/1#issuecomment-67892164)
21:30 partner_ i was just about to write an issue, do you want me to proceed with it or?
21:31 partner_ np, i want it and i'm sure many others will benefit a lot from it, too
21:31 partner_ it's obviously broken so why not include a fixed one
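As a stopgap while the packaged logrotate config is broken, a hedged sketch of dropping in a minimal local policy (paths and rotation schedule are assumptions; copytruncate avoids having to signal the daemons):

    cat > /etc/logrotate.d/glusterfs-local <<'EOF'
    /var/log/glusterfs/*.log /var/log/glusterfs/bricks/*.log {
        weekly
        rotate 8
        compress
        missingok
        notifempty
        copytruncate
    }
    EOF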
21:32 partner_ took a while to find my github account, haven't been around much lately :)
21:32 semiosis agreed
21:32 semiosis what were you going to create an issue about?  the 3.4.6 wheezy package?
21:33 partner_ yeah, i kind of thought you wanted them all there, judging from the bot comments above
21:34 partner_ i have no complaints about the packages other than the logrotate at this point, they work fine, thanks
21:36 partner_ and the dsc + source files are important too, and they are available there - big thanks for that as well, it helps greatly anybody who has to build it for a newer/older/different system
21:40 jaank_ joined #gluster
21:48 MattJ_EC Quick question - I've got a problem with my webserver cluster, where every server has several Gluster volumes mounted. Whichever server I run our cron tasks on (there are quite a few of them) quickly climbs to 100% I/O usage by Gluster processes and then maxes out the CPU with I/O wait time. Is there an easy way to identify what sort of traffic is causing this? iotop shows no processes using significant I/O other than gluster itself.
21:48 MattJ_EC (and jbd2 as well, but I think that's normal)
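A hedged starting point for narrowing that down from the gluster side is the built-in profiling commands (the volume name is a placeholder); profiling adds some overhead, so switch it off afterwards:

    gluster volume profile wwwvol start
    gluster volume profile wwwvol info      # per-brick latency and FOP counts
    gluster volume top wwwvol open          # most frequently opened files
    gluster volume top wwwvol read
    gluster volume top wwwvol write
    gluster volume profile wwwvol stop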
21:55 elico joined #gluster
21:56 Pupeno joined #gluster
21:56 Pupeno joined #gluster
21:56 coredump joined #gluster
22:07 badone joined #gluster
22:11 calisto joined #gluster
22:22 badone joined #gluster
22:29 diegows joined #gluster
22:41 B21956 joined #gluster
23:17 jaank joined #gluster
23:21 B21956 joined #gluster
