
IRC log for #gluster, 2014-12-09


All times are shown in UTC.

Time Nick Message
00:00 kmai007 b/c its not an x which would be a family like 3.4.x = 2
00:01 kmai007 wow just a month away from freenode and i'm like a decade behind
00:05 B21956 joined #gluster
00:14 JoeJulian It was changed mid-release. It was a needed change in order to fix a bug.
00:14 kmai007 https://bugzilla.redhat.com/show_bug.cgi?id=1168897 i suppose I may have a syntax error, but this cluster.op-version is not known
00:14 glusterbot Bug 1168897: medium, medium, ---, bugs, NEW , Attempt remove-brick after node has terminated in cluster gives error: volume remove-brick commit force: failed: One or more nodes do not support the required op-version. Cluster op-version must atleast be 30600.
00:18 kmai007 i guess i'll document a new bug
00:19 JoeJulian Are you running 3.6?
00:19 elyograg anyone see my question earlier?
00:20 JoeJulian Nope, I missed it. You can stop the volume.
00:21 JoeJulian If the two servers you're trying to remove are part of the same peer group, that'll be a problem if you need to change any settings in the volume that remains running.
00:21 JoeJulian If they're separate peer groups, you'll be fine.
00:22 elyograg they're in the same peer listing.
00:22 elyograg only have one of those.
00:23 bennyturns joined #gluster
00:24 elyograg basically, I'd like to decommission the volume and the servers that it's on, but leave them in the rack unpowered, and retain the ability to recover and remount the volume if it becomes necessary.
00:26 elyograg Each server has eight bricks on it, so it's not as simple as just rifling through a brick.
00:26 JoeJulian You would have to stop and delete the volume. peer detach those two servers. Create a new peer group with *just* those two servers, re-create the volume exactly as it was (you'll need to follow the ,,(path or prefix) instructions).
00:26 glusterbot http://joejulian.name/blog/glusterfs-path-or-a-prefix-of-it-is-already-part-of-a-volume/
00:27 JoeJulian Then you can shut it down, and bring it back up as necessary.
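For reference, a rough sketch of the sequence JoeJulian describes, using placeholder names (volume oldvol, servers serverA/serverB, abbreviated brick paths); the re-create step has to repeat the original brick list and layout exactly:

    # from a node that stays in the running peer group
    gluster volume stop oldvol
    gluster volume delete oldvol
    gluster peer detach serverA
    gluster peer detach serverB

    # later, on serverA: form a new peer group of just the two servers
    gluster peer probe serverB

    # clear the "path or a prefix of it is already part of a volume" guard
    # on each brick root, per the linked article (repeat for every brick):
    setfattr -x trusted.glusterfs.volume-id /bricks/b1
    setfattr -x trusted.gfid /bricks/b1

    # re-create with the original layout; it can then be stopped/started at will
    gluster volume create oldvol serverA:/bricks/b1 serverB:/bricks/b1 ...
    gluster volume start oldvol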
00:28 kmai007 https://bugzilla.redhat.com/show_bug.cgi?id=1171921
00:28 glusterbot Bug 1171921: unspecified, unspecified, ---, bugs, NEW , gluster volume set all cluster.op-version is not recognized
00:28 kmai007 currently testing out 3.5.3-1
00:29 JoeJulian So, kmai007, you can't set op-version to anything higher than 30503
00:29 glusterbot News from newglusterbugs: [Bug 1171921] gluster volume set all cluster.op-version is not recognized <https://bugzilla.redhat.com/show_bug.cgi?id=1171921>
00:29 glusterbot News from newglusterbugs: [Bug 1168897] Attempt remove-brick after node has terminated in cluster gives error: volume remove-brick commit force: failed: One or more nodes do not support the required op-version. Cluster op-version must atleast be 30600. <https://bugzilla.redhat.com/show_bug.cgi?id=1168897>
00:30 kmai007 i cannot set anything in the cli
00:30 JoeJulian Because 3.5.3 has no code to perform some of the 3.6 remote procedures.
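For anyone following along: a node's current operating version is recorded in /var/lib/glusterd/glusterd.info, and the cluster-wide bump uses the command from the bug title. A sketch, noting that on 3.5.3 the ceiling is 30503, so asking for 30600 is expected to fail:

    # what this glusterd is currently operating at
    grep operating-version /var/lib/glusterd/glusterd.info

    # raise the cluster op-version (only succeeds if every peer supports it)
    gluster volume set all cluster.op-version 30600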
00:30 kmai007 dang, i guess which glusterfs version should I implement in my environment
00:31 kmai007 i took 3.5.3 stable,
00:31 kmai007 to test with
00:31 JoeJulian That's the one I would choose right now.
00:31 kmai007 its not a show stopper
00:31 JoeJulian Though 3.6 has been surprisingly quiet.
00:31 kmai007 i am not mucking with it, but I thought I'd try it through the CLI
00:35 kmai007 dang it, i take that back about quotas on 3.5.3, its not working at all
00:36 kmai007 i created a new volume, and its not reflecting what i want as a limit
00:36 kmai007 how was 3.5.2 ?
00:36 JoeJulian there were critical bugs in 3.5.3.
00:37 JoeJulian What's the log say. Why is it failing?
00:37 _pol_ joined #gluster
00:39 kmai007 client logs show no errors
00:39 kmai007 its mounted, but doing a df -h shows the full brick and not my quota
00:41 kmai007 in 1 of my bricks, i do see the quota option as off
00:41 kmai007 in the /brick/test.log
00:41 kmai007 http://fpaste.org/157814/85703141/
00:43 kmai007 Options Reconfigured:
00:43 kmai007 features.quota: on
00:43 kmai007 but nothing about the size of / like i'm used to seeing when setting the quota
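For comparison, the usual quota workflow looks roughly like this (volume name and limit are placeholders). If memory serves, 3.4 showed per-directory limits in volume info as features.limit-usage, while after the 3.5 quota rework they are only visible via quota list, which matches what kmai007 is describing:

    gluster volume quota test3 enable
    gluster volume quota test3 limit-usage / 10GB

    # limits no longer appear under "Options Reconfigured"; list them explicitly
    gluster volume quota test3 list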
00:44 elyograg kmai007: bug 1031817 may be applicable here.
00:44 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1031817 unspecified, unspecified, ---, bugs, NEW , Setting a quota for the root of a volume changes the reported volume size
00:44 elyograg one of mine. :)
00:45 kmai007 elyograg: thanks, i started on 3.4.2, and it was fine
00:45 elyograg I consider that behavior a bug.
00:45 kmai007 its only after i've upgraded to 3.5.3 that all my quotas are now gone from gluster volume info
00:47 kmai007 i can list the quota from cli, quota list, but its not displayed in volume info like so http://fpaste.org/157815/41808601/
00:48 kmai007 well tomorrow is another day, imma go brainstorm some more,
01:07 bala joined #gluster
01:09 kmai007 anybody there?
01:10 kmai007 i lied, i didn't go home
01:10 JoeJulian only partly here.
01:10 kmai007 when would the storage server mount a volume on itself?
01:10 kmai007 to a particular volume name
01:10 kmai007 a df shows this is mounted on the storage
01:10 kmai007 localhost:test3       1.0T  170G  854G  17% /var/run/gluster/test3
01:11 kmai007 its empty inside
01:11 kmai007 there is a pid against it when i run lsof
01:12 kmai007 http://fpaste.org/157819/08752414/ looks like it belongs to the glusterfs process
01:12 JoeJulian rebalance, self-heal...
01:12 kmai007 nasty on a brand new volume
01:12 JoeJulian maybe even quota management, not sure...
01:13 kmai007 this R&D gluster is f'd up
01:13 kmai007 i was trying to avoid a rebuild
01:13 JoeJulian Yep, looks like quota management. Did you look in /var/log/glusterfs/quota-mount-test3.log
01:14 kmai007 yep, nothing abnormal http://fpaste.org/157821/80876361/
01:16 itisravi joined #gluster
01:16 kmai007 i fsck'd client-0 thinking it had a bad inode
01:17 kmai007 but no changes
01:18 kmai007 if i kill that pid, then the localhost: mount disappears
01:19 kmai007 i'll check bugzilla
01:19 kmai007 looks like this issue was already discovered by some other poor fellow
01:31 kmai007 joined #gluster
01:35 tdasilva joined #gluster
01:45 lpabon joined #gluster
01:48 marcoceppi joined #gluster
02:08 kdhananjay joined #gluster
02:08 haomaiwa_ joined #gluster
02:13 itisravi joined #gluster
02:37 _pol joined #gluster
02:45 kmai007 joined #gluster
02:45 kdhananjay joined #gluster
02:56 itisravi joined #gluster
02:58 coredump joined #gluster
03:07 meghanam joined #gluster
03:19 _pol joined #gluster
03:31 calisto joined #gluster
03:40 kdhananjay joined #gluster
03:43 harish joined #gluster
03:46 bharata-rao joined #gluster
03:51 kanagaraj joined #gluster
03:53 calisto joined #gluster
03:55 jaank joined #gluster
04:01 RameshN joined #gluster
04:03 spandit joined #gluster
04:05 atinmu joined #gluster
04:09 itisravi joined #gluster
04:16 ndarshan joined #gluster
04:16 spandit joined #gluster
04:20 kshlm joined #gluster
04:23 bala joined #gluster
04:29 hagarth joined #gluster
04:35 anoopcs joined #gluster
04:37 jiffin joined #gluster
04:37 sahina joined #gluster
04:41 soumya_ joined #gluster
04:41 nbalacha joined #gluster
04:45 rafi1 joined #gluster
04:57 ppai joined #gluster
05:12 zerick joined #gluster
05:20 jbrooks joined #gluster
05:28 anil joined #gluster
05:30 glusterbot News from newglusterbugs: [Bug 1171954] [RFE] Rebalance Performance Improvements <https://bugzilla.redhat.com/show_bug.cgi?id=1171954>
05:30 jbrooks joined #gluster
05:32 poornimag joined #gluster
05:36 kdhananjay joined #gluster
05:37 atalur joined #gluster
05:51 ramteid joined #gluster
05:53 maveric_amitc_ joined #gluster
06:01 sac_ joined #gluster
06:09 kshlm joined #gluster
06:17 soumya__ joined #gluster
06:21 codex joined #gluster
06:28 saurabh joined #gluster
06:34 meghanam joined #gluster
06:36 sahina joined #gluster
06:40 jiffin joined #gluster
06:50 spandit joined #gluster
06:50 sahina joined #gluster
07:00 ctria joined #gluster
07:02 hagarth joined #gluster
07:03 nshaikh joined #gluster
07:15 atinmu joined #gluster
07:19 jtux joined #gluster
07:24 rgustafs joined #gluster
07:30 glusterbot News from newglusterbugs: [Bug 1093692] Resource/Memory leak issues reported by Coverity. <https://bugzilla.redhat.com/show_bug.cgi?id=1093692>
07:36 jbrooks joined #gluster
08:03 spandit joined #gluster
08:03 nbalacha joined #gluster
08:21 rjoseph joined #gluster
08:23 ghenry joined #gluster
08:23 ghenry joined #gluster
08:25 fsimonce joined #gluster
08:29 lalatenduM joined #gluster
08:31 glusterbot News from resolvedglusterbugs: [Bug 1115199] Unable to get lock for uuid,  Cluster lock not held <https://bugzilla.redhat.com/show_bug.cgi?id=1115199>
08:31 atalur joined #gluster
08:31 LebedevRI joined #gluster
08:34 overclk joined #gluster
08:42 hagarth joined #gluster
08:44 rjoseph joined #gluster
08:49 atalur joined #gluster
08:56 ricky-ticky joined #gluster
08:57 morse joined #gluster
09:05 kovshenin joined #gluster
09:07 sahina joined #gluster
09:08 bala joined #gluster
09:09 Anuradha joined #gluster
09:12 [Enrico] joined #gluster
09:12 [Enrico] joined #gluster
09:15 ninkotech joined #gluster
09:17 Philambdo joined #gluster
09:18 LebedevRI joined #gluster
09:25 karnan joined #gluster
09:25 Slashman joined #gluster
09:30 kaushal_ joined #gluster
09:34 gildub joined #gluster
09:35 kshlm joined #gluster
09:39 sahina joined #gluster
09:46 rjoseph joined #gluster
09:47 atinmu joined #gluster
09:53 bala joined #gluster
09:55 soumya__ joined #gluster
09:56 ppai joined #gluster
10:13 kaushal_ joined #gluster
10:29 nshaikh joined #gluster
10:31 glusterbot News from newglusterbugs: [Bug 1172058] push-pem does not distribute common_secret.pem.pub <https://bugzilla.redhat.com/show_bug.cgi?id=1172058>
10:33 elico joined #gluster
10:42 social joined #gluster
10:45 kshlm joined #gluster
10:53 ctria joined #gluster
10:58 jiffin1 joined #gluster
11:01 rjoseph joined #gluster
11:01 atinmu joined #gluster
11:04 hagarth joined #gluster
11:06 anil joined #gluster
11:06 sahina joined #gluster
11:07 rgustafs joined #gluster
11:07 bjornar joined #gluster
11:20 T3 joined #gluster
11:22 vikumar joined #gluster
11:38 verdurin joined #gluster
11:42 nshaikh joined #gluster
11:43 maveric_amitc_ joined #gluster
11:44 ndevos REMINDER: Gluster Community Bug Triage meeting starts in 15 minutes in #gluster-meeting
11:48 calum_ joined #gluster
11:57 lpabon joined #gluster
12:07 chirino joined #gluster
12:14 tdasilva joined #gluster
12:21 nshaikh joined #gluster
12:26 RameshN joined #gluster
12:27 kovshenin joined #gluster
12:36 itisravi_ joined #gluster
12:37 kshlm joined #gluster
12:37 kshlm joined #gluster
12:43 kaii joined #gluster
12:43 kaii Hi
12:43 glusterbot kaii: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
12:46 kaii Why is it that "gluster volume heal gv0 info" shows unsynced entries on a localhost installation (client and server on the same host)? i want to write some sort of health check that triggers a notification when one of the bricks is behind. it seems to me that "volume heal <vol> info" is not the right command to obtain that information, but i'm unsure what else to use ..
12:47 kaii with my current solution, localhost is "out of sync" when i write a lot of files .. this is not the intended behaviour of my health check .. any suggestions?
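If it helps, a minimal sketch of a check along those lines, purely illustrative (the volume name is a placeholder). Since heal info can legitimately report in-flight entries during heavy writes, it only alerts when the count stays non-zero across two samples:

    #!/bin/bash
    # sum the "Number of entries:" lines across all bricks of the volume
    VOL=gv0
    count() {
        gluster volume heal "$VOL" info | awk '/^Number of entries:/ {s += $NF} END {print s + 0}'
    }
    [ "$(count)" -eq 0 ] && exit 0
    sleep 60
    n=$(count)
    if [ "$n" -gt 0 ]; then
        echo "WARNING: $VOL still has $n entries pending heal"
        exit 1
    fi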
12:48 kanagaraj joined #gluster
12:51 ctria joined #gluster
12:52 ctrianta joined #gluster
12:52 ctria joined #gluster
12:54 feeshon joined #gluster
12:56 itisravi joined #gluster
13:03 anoopcs joined #gluster
13:06 bennyturns joined #gluster
13:14 rgustafs joined #gluster
13:19 bene2 joined #gluster
13:32 glusterbot News from newglusterbugs: [Bug 1170075] [RFE] : BitRot detection in glusterfs <https://bugzilla.redhat.com/show_bug.cgi?id=1170075>
13:32 chirino joined #gluster
13:37 grayeul joined #gluster
13:38 calisto joined #gluster
13:39 grayeul does anyone have any ideas what it might mean if gluster peer status from one end shows connected, but shows disconnected from the other end?
13:46 calisto joined #gluster
13:50 RameshN joined #gluster
13:55 nbalacha joined #gluster
13:58 julim joined #gluster
13:59 bala joined #gluster
13:59 fsimonce joined #gluster
14:00 nbalacha joined #gluster
14:02 glusterbot News from resolvedglusterbugs: [Bug 1163709] gstatus: If a volume is mounted more than once from a machine, it is still considered as a single client <https://bugzilla.redhat.com/show_bug.cgi?id=1163709>
14:02 _pol joined #gluster
14:06 calum_ joined #gluster
14:06 _pol_ joined #gluster
14:08 virusuy joined #gluster
14:17 diegows joined #gluster
14:22 lalatenduM joined #gluster
14:30 dgandhi joined #gluster
14:33 rafi1 joined #gluster
14:44 ctria joined #gluster
14:47 edward1 joined #gluster
14:57 edward1 left #gluster
14:57 dberry joined #gluster
14:57 dberry joined #gluster
15:00 Pupeno joined #gluster
15:09 calisto joined #gluster
15:17 shaunm joined #gluster
15:17 shaunix left #gluster
15:21 anil joined #gluster
15:25 RameshN joined #gluster
15:26 bennyturns joined #gluster
15:28 bene2 joined #gluster
15:32 B21956 joined #gluster
15:43 plarsen joined #gluster
15:47 poornimag joined #gluster
15:49 jobewan joined #gluster
15:49 RameshN joined #gluster
15:53 jbrooks joined #gluster
15:59 _pol joined #gluster
16:00 tdasilva joined #gluster
16:05 poornimag joined #gluster
16:07 sac_ joined #gluster
16:12 P0w3r3d joined #gluster
16:14 nbalacha joined #gluster
16:16 P0w3r3d joined #gluster
16:26 meghanam joined #gluster
16:26 Slasheri joined #gluster
16:30 calisto joined #gluster
16:31 hagarth joined #gluster
16:38 harish joined #gluster
16:39 dgandhi when I use fuse.glusterfs it default mounts relatime, does this mean that atime will somehow be written to the brick daily, or that the brick will update atime on the least of the brick/volume mount options ?
16:43 lmickh joined #gluster
16:49 vimal joined #gluster
16:53 ndevos .wub ka
17:02 virusuy joined #gluster
17:02 virusuy joined #gluster
17:02 zerick joined #gluster
17:05 rafi1 joined #gluster
17:12 elyograg left #gluster
17:27 elico joined #gluster
17:31 TrDS joined #gluster
17:34 kalzz joined #gluster
17:36 calisto joined #gluster
17:41 Slasheri joined #gluster
17:41 gstock_ joined #gluster
17:48 PeterA joined #gluster
18:03 jaank joined #gluster
18:03 glusterbot News from newglusterbugs: [Bug 1172262] glusterfs client crashed while migrating the fds <https://bugzilla.redhat.com/show_bug.cgi?id=1172262>
18:11 jbrooks joined #gluster
18:12 RameshN joined #gluster
18:13 jbrooks joined #gluster
18:14 y4m4 joined #gluster
18:18 anoopcs joined #gluster
18:40 RameshN joined #gluster
18:41 portante left #gluster
18:41 rotbeard joined #gluster
18:55 diegows joined #gluster
18:56 khelll joined #gluster
18:56 khelll hey
18:59 khelll I have a common use case, I want a file system that can handle static files for a web project. Usually I use s3, but this time we have very high bandwidth, around 15TB.
19:00 khelll so it's costly. I read in several sources that GlusterFS is not a good fit for small static files
19:02 n-st joined #gluster
19:06 chirino joined #gluster
19:09 ricky-ticky1 joined #gluster
19:18 diegows joined #gluster
19:20 Philambdo joined #gluster
19:23 RameshN joined #gluster
19:24 TehStig joined #gluster
19:30 JoeJulian @php | khelll
19:32 Intensity joined #gluster
19:32 Intensity joined #gluster
19:42 semiosis ~php | khelll
19:42 glusterbot khelll: (#1) php calls the stat() system call for every include. This triggers a self-heal check which makes most php software slow as they include hundreds of small files. See http://joejulian.name/blog/optimizing-web-performance-with-glusterfs/ for details., or (#2) It could also be worth mounting fuse with glusterfs --attribute-timeout=HIGH --entry-timeout=HIGH --negative-timeout=HIGH --fopen-keep-cache
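Spelled out as an actual mount, the options from that second factoid look roughly like this (server/volume names and the timeout values are placeholders, not a recommendation):

    glusterfs --volfile-server=server1 --volfile-id=myvol \
        --attribute-timeout=600 --entry-timeout=600 \
        --negative-timeout=600 --fopen-keep-cache /mnt/myvol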
19:43 khelll @semiosis  i'm not using php, i want this to serve the images on a web page, but it seems the same, right?
19:44 khelll @semiosis think of it as photo hosting service
19:45 JoeJulian khelll: Most of the theories hold true.
19:45 semiosis they're static files?  put a cache in front, like varnish
19:46 JoeJulian pfft... what would semiosis know about photo hosting... ;)
19:46 khelll @semiosis Yes i'm considering this of course,
19:46 semiosis hey, i own a camera
19:47 khelll good for you, i own my mobile camera
19:47 JoeJulian http://www.picturemarketing.com/ <-- semiosis
19:47 semiosis oh, i also run a large photo hosting site
19:48 glusterbot JoeJulian: <'s karma is now -7
19:48 JoeJulian hehe
19:48 semiosis i guess large is relative
19:48 JoeJulian <++ <++ <++
19:48 glusterbot JoeJulian: <'s karma is now -6
19:48 glusterbot JoeJulian: <'s karma is now -5
19:48 glusterbot JoeJulian: <'s karma is now -4
19:53 bene3 joined #gluster
20:01 gildub joined #gluster
20:08 _dist joined #gluster
20:12 Philambdo joined #gluster
20:13 chirino joined #gluster
20:14 _pol_ joined #gluster
20:15 lpabon joined #gluster
20:32 elico joined #gluster
21:00 calum_ joined #gluster
21:01 badone joined #gluster
21:02 _pol_ joined #gluster
21:11 kmai007 joined #gluster
21:11 chirino joined #gluster
21:27 badone joined #gluster
21:39 badone joined #gluster
21:42 kmai007 bugga boo https://bugzilla.redhat.com/show_bug.cgi?id=1172348
21:42 glusterbot Bug 1172348: medium, unspecified, ---, bugs, NEW , new installation of glusterfs3.5.3-1; quota not displayed to client.
21:46 kmai007 i'm sorry, looks like in 3.5.3 i had to enable a new quota feature i found in https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3/html/Administration_Guide/sect-Displaying_Quota_Limit_Information.html
21:46 kmai007 enabling this now, my clients now reflect the appropriate df reports
21:47 kmai007 which I didn't ever have to do in 3.4.3
21:47 kmai007 :-(
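(For the record, the setting that Red Hat document covers appears to be the statfs override that makes a client df report the quota limit instead of the raw brick size:)

    # off by default on a fresh 3.5 volume, hence the surprise above
    gluster volume set VOLNAME features.quota-deem-statfs on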
21:59 _pol joined #gluster
22:05 glusterbot News from newglusterbugs: [Bug 1172348] new installation of glusterfs3.5.3-1; quota not displayed to client. <https://bugzilla.redhat.com/show_bug.cgi?id=1172348>
22:08 yoavz joined #gluster
22:14 T3 joined #gluster
22:20 B21956 joined #gluster
22:28 deniszh joined #gluster
22:35 Gilbs joined #gluster
22:37 Gilbs Are there any issues running 3.6.1 gluster servers and 3.5 clients?
22:40 vimal joined #gluster
22:41 Gilbs left #gluster
22:43 doo joined #gluster
22:46 vimal joined #gluster
22:48 Philambdo joined #gluster
23:01 JordanHackworth joined #gluster
23:09 kmai007 if there are issues, i was told something about updating op-version
23:09 kmai007 @op-version
23:09 glusterbot kmai007: The operating version represents the RPC and translator capabilities required to accommodate the volume settings ( http://gluster.org/community/documentation/index.php/OperatingVersions ). To allow older version clients to connect to newer servers, reset any volume options that require the newer op-version.
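Concretely, the factoid's advice boils down to finding whichever volume option raised the required op-version and resetting it (the option name below is only a placeholder):

    # see which options are set on the volume
    gluster volume info myvol

    # put the 3.6-only option back to its default so 3.5 clients can connect
    gluster volume reset myvol some.new-option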
23:11 Micromus joined #gluster
23:12 Micromus Which OS is preferred for gluster? Is debian7 fine?
23:17 ivok joined #gluster
23:21 JoeJulian TempleOS
23:23 JoeJulian Micromus: any distro is fine. We do prefer you use our upstream repo over any that happen to be packaged with the distro. Distros don't keep up on bugfixes and releases - often locking in to a broken version to maintain a "stable" distro.
23:32 Micromus Ofc, I always go for a distro that is supported by an official repo, especially for debian, as debian stable packages are updated once a century
23:33 Micromus Is there any advantages to running on centos/rhel rather than ubuntu/debian? As it is kind of a RHEL project..?
23:36 JoeJulian Not really. We try and build for everybody.
23:37 Micromus Ok, good stuffs!
23:38 JoeJulian Picking Linux distros is like catching fish. There are plenty of them, most stink a little in one way or another, but they're all better than going to the office.
23:42 Micromus :P
23:43 Micromus I'm used to the smell of debian, but acknowledge that some tools (I've been toying with ambari and hadoop the last week) are only supported on centos/rhel at the moment
23:55 M28 joined #gluster
23:56 TrDS left #gluster
