
IRC log for #gluster, 2015-02-01


All times shown according to UTC.

Time Nick Message
00:02 wkf joined #gluster
00:18 sputnik13 joined #gluster
00:21 T3 joined #gluster
00:25 sputnik13 joined #gluster
00:47 T3 joined #gluster
01:01 harish joined #gluster
01:18 jmarley joined #gluster
01:18 harish joined #gluster
01:45 ilbot3 joined #gluster
01:45 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:52 lkoranda joined #gluster
01:52 JamesG joined #gluster
01:52 Gugge joined #gluster
01:52 afics joined #gluster
01:52 wkf joined #gluster
01:52 hybrid512 joined #gluster
01:52 lkoranda joined #gluster
01:52 fubada joined #gluster
01:55 Pupeno joined #gluster
02:01 tetreis joined #gluster
02:23 harish joined #gluster
02:36 wkf joined #gluster
02:46 harish joined #gluster
02:48 ilbot3 joined #gluster
02:48 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:50 Pupeno_ joined #gluster
02:58 wkf joined #gluster
02:59 bala joined #gluster
03:06 wkf joined #gluster
03:12 Gill joined #gluster
03:17 harish joined #gluster
03:27 wkf joined #gluster
03:34 Pupeno joined #gluster
03:53 T3 joined #gluster
03:55 Pupeno joined #gluster
04:06 T3 joined #gluster
04:35 Pupeno joined #gluster
04:35 Pupeno joined #gluster
05:06 meghanam joined #gluster
05:12 Pupeno_ joined #gluster
05:18 elico joined #gluster
05:48 rjoseph joined #gluster
06:07 osiekhan2 joined #gluster
06:17 sputnik13 joined #gluster
06:23 T3 joined #gluster
06:28 avati joined #gluster
07:40 rcampbel3 joined #gluster
08:03 soumya_ joined #gluster
08:12 redbeard joined #gluster
08:29 LebedevRI joined #gluster
08:37 AaronGr joined #gluster
08:37 zerick_ joined #gluster
08:38 sharknar1o joined #gluster
08:38 lanning_ joined #gluster
08:39 glusterbot` joined #gluster
08:39 _br_- joined #gluster
08:39 squizzi_ joined #gluster
08:39 purpleid1a joined #gluster
08:40 partner_ joined #gluster
08:40 dockbram_ joined #gluster
08:40 Dave2_ joined #gluster
08:40 mikemol joined #gluster
08:40 soumya__ joined #gluster
08:42 ndevos joined #gluster
08:42 ndevos joined #gluster
08:43 klaas_ joined #gluster
08:43 kovshenin joined #gluster
08:44 sac`away` joined #gluster
08:48 doekia joined #gluster
08:48 al joined #gluster
08:54 sputnik13 joined #gluster
09:04 T0aD joined #gluster
09:19 elico joined #gluster
09:19 LebedevRI joined #gluster
09:27 sputnik13 joined #gluster
09:32 sputnik13 joined #gluster
10:15 dataio joined #gluster
10:30 rafi joined #gluster
10:37 sputnik13 joined #gluster
10:45 nangthang joined #gluster
10:46 aravindavk joined #gluster
10:51 nbalacha joined #gluster
10:57 glusterbot News from newglusterbugs: [Bug 1155181] Lots of compilation warnings on OSX.  We should probably fix them. <https://bugzilla.redhat.com/show_bug.cgi?id=1155181>
11:04 meghanam joined #gluster
11:11 aravindavk joined #gluster
11:20 LebedevRI joined #gluster
11:41 jvandewege joined #gluster
13:20 tanuck joined #gluster
13:40 capri joined #gluster
13:53 Gill joined #gluster
13:56 nbalacha joined #gluster
14:11 verboese|sleep joined #gluster
14:11 Pupeno joined #gluster
14:54 bala joined #gluster
15:04 kovshenin joined #gluster
15:24 jaank joined #gluster
15:42 Pupeno joined #gluster
15:48 GiLgAmEz1 joined #gluster
15:50 GiLgAmEz1 hi to all! i'm having a strange issue on a gluster 3.1 server with 4 bricks (4 local SCSI discs). I can see and access the files in the brick path but I can't access them by NFS.
15:51 GiLgAmEz1 I tried "gluster volume rebalance <volume> start" but the problem persists.
15:52 GiLgAmEz1 now i'm running a find exec with a stat for every file.
15:53 GiLgAmEz1 should this command help?
15:57 * ndevos does not have any idea about 3.1 and suggests to upgrade; 3.4 is the oldest version that gets bug fixes
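On GlusterFS of that vintage, a recursive stat issued through a client mount was the documented way to force lookups that trigger self-heal and refresh stale file metadata, which is what the find above attempts. A minimal sketch, assuming a hypothetical client mount at /mnt/gluster:

    # stat every entry via the client mount; the lookups make gluster
    # re-resolve (and heal) each file
    find /mnt/gluster -noleaf -print0 | xargs -0 stat >/dev/null

Note it must run against the client mount, not the brick path: the heal is driven by client-side lookups.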
15:58 meghanam joined #gluster
16:01 _Bryan_ joined #gluster
16:04 GiLgAmEz1 ndevos: I should... but i'm the new manager of an old Rocks cluster. And the server with glusterfs is running debian 4.
16:04 GiLgAmEz1 ndevos: :(
16:05 GiLgAmEz1 ndevos: i'm planning to migrate to a new server. but in the meantime... I should solve this...
16:06 ndevos GiLgAmEz1: I can only suggest to send an email to the gluster-users@gluster.org list, maybe someone remembers how to use 3.1
16:07 GiLgAmEz1 ndevos: thanks :)
16:07 ndevos Good luck, GiLgAmEz1 !
16:07 GiLgAmEz1 ndevos: :)
16:26 jaank joined #gluster
16:29 T3 joined #gluster
16:38 Pupeno_ joined #gluster
16:47 tanuck joined #gluster
16:47 mbukatov joined #gluster
16:48 tanuck joined #gluster
17:20 sputnik13 joined #gluster
17:23 kanagaraj joined #gluster
17:56 n-st joined #gluster
18:05 jaank joined #gluster
18:08 GiLgAmEzH joined #gluster
18:12 ralala joined #gluster
18:24 ricky-ticky joined #gluster
18:34 redbeard joined #gluster
18:52 Cenbe joined #gluster
18:52 Pupeno joined #gluster
18:52 Pupeno joined #gluster
18:58 xavih joined #gluster
19:06 chirino joined #gluster
19:21 Gill Hey guys… quick question: on boot I'm getting “mount failed please check the log file for more details”. Is there any way to skip it so the server will boot and I can figure out how to fix the issue? I can’t get into the OS right now. It’s Debian
19:34 Pupeno_ joined #gluster
19:36 partner_ Gill: boot into rescue mode ?
19:36 Gill hey partner_ how goes it?
19:37 Gill I was hoping there was a diff way but sounds good! Thanks partner_! :)
19:37 Gill partner_: ++
19:37 glusterbot Gill: partner_'s karma is now 1
19:40 partner Gill: so what is the issue, you have /etc/fstab set up wrong?
19:40 partner once you get the "boot:" shown type in rescue
19:40 Gill I changed my network config probably have an issue there
19:40 partner does it offer you any sort of shell ?
19:40 Gill i went into rescue mode waiting for it to start fully
19:41 partner the system won't advance from that state further, fix the issues and reboot the box
19:41 Gill cant get into CLI
19:42 Gill give root password for maintenance or ctrl D to continue
19:42 Gill neither work
19:43 partner it will give you root shell if you type the password correctly
19:43 Gill ok ill try again
19:44 Pupeno joined #gluster
19:44 partner i'm not sure if glusterfs is somehow related to the case, #debian is probably a more appropriate channel for general issues with the system
19:45 Gill yea its not a glusterfs issue, i was just wondering if there was a way to skip the mount failing and let the system boot
19:45 wkf joined #gluster
19:46 partner its often safer not to let a system boot up without all of its disks as the system just cannot know how important they were, better to halt and wait for the admin to fix the issue
19:46 partner which is why every new system should be rebooted before taking it into production, to prevent this from happening after regular maintenance
19:47 partner not related but just wanted to say it out loud i see :o
19:47 Gill makes sense
19:47 partner (we often put the reboot as the final step of deploying any box)
19:48 partner anyways, go check things out, i'll go background for a moment and will check back a bit later
19:48 Gill cool thanks partner!
19:48 partner make sure you don't have any CAPS LOCK issue with the root password in case this is a fresh install, or charset, or similar
19:49 Gill ok
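For the original question, the standard Debian-era way to let a box finish booting even when a mount fails is the nofail option in /etc/fstab, usually paired with _netdev so a network filesystem is not mounted before networking is up. A sketch with a hypothetical volume and mount point:

    # /etc/fstab
    # _netdev: wait for networking before mounting
    # nofail:  keep booting instead of dropping to the rescue prompt
    server1:/gv0  /mnt/gluster  glusterfs  defaults,_netdev,nofail  0  0

The caveat partner raises still applies: anything that starts before the admin notices will see an empty mount point instead of the volume.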
20:10 tanuck joined #gluster
20:13 GiLgAmEzH joined #gluster
20:21 gvandeweyer joined #gluster
20:26 Pupeno_ joined #gluster
20:52 plarsen joined #gluster
20:55 Pupeno joined #gluster
21:04 Pupeno_ joined #gluster
21:08 tanuck joined #gluster
21:13 harish joined #gluster
21:23 T3 joined #gluster
21:24 harish joined #gluster
21:27 partner how was the FOSDEM?
21:31 hagarth partner: looks like the gluster talks were very well received at FOSDEM
21:31 hagarth ndevos should post an update on the blog sometime
21:40 ralalala joined #gluster
21:44 partner damn, now that i see the schedules i could have joined.. somehow all the talks interesting to me were well hidden :/
21:44 partner i did have a look a couple of times well in advance but it just felt like a conf for developers
21:46 partner hagarth: glad to hear, IMO there is plenty of room for gluster out there, it provides things that not much else does
21:46 partner so, nice to have guys there giving talks and keeping office hours
21:48 hagarth partner: yes, working with the broader community is a great mechanism for developers to understand what we need to build
21:48 partner indeed, we need to keep the noise up about us
21:50 partner i've only met ndevos from this community, hope to meet a couple of you others out there as well. just not sure if gluster will be any part of my work from tomorrow onwards
21:50 ralalala joined #gluster
21:51 partner i hope to have it around me but no idea
21:51 hagarth partner: ah ok, what will you be working on?
21:51 partner i was moved to cloud stuff so ceph is the storage side there
21:52 hagarth partner: that's fun too. If you hop to devconf.cz next weekend, you are more likely to meet a bunch of gluster developers.
21:53 partner hagarth: yeah, noticed that is coming up but i probably cannot arrange it by any means this quickly
21:53 hagarth partner: oh ok
21:54 partner just feels like i've wasted two years trying to understand how gluster works and now i might not need a bit of that info..
21:55 partner surely i've expressed my interest towards learning ceph better, we've just been a more file-oriented division within the company where gluster fills the gap we lacked just perfectly
21:55 elico joined #gluster
21:56 hagarth partner: you never know. some day it might become handy.
21:57 partner hagarth: yeah, trying to figure out how to keep things "alive" here. if i don't touch something for long enough it becomes irrelevant to me, and i just can't be around for all the communities if i don't use the product anymore
21:57 partner i love to help others and learn from the others and run the stuff
21:58 partner i also wanted to go beyond the petabyte limit with our storage but i'm still lacking bytes from there, "just to prove it can be done" :)
22:00 hagarth partner: understandably so.
22:01 partner ohwell, starting new career with ease, couple of days vacation, couple of following days some planning far away from the office. just maybe i might visit the office on friday
22:01 hagarth partner: cool, good luck for that!
22:02 partner hagarth: thanks. maybe its time for new challenges after being almost 9 years in same team :)
22:02 partner not that "my work here is done", it never is but i guess now it finally is anyways ;)
22:03 hagarth partner: possibly yes, a rolling stone gathers no moss as the old saying goes :)
22:03 partner indeed :)
22:04 partner i still feel a bit sad, it took quite a bit of effort and free time to become a somewhat familiar name here, feels like a waste to be leaving all this now
22:05 partner for some strange reason this is the most hurting part of this all
22:05 partner not that i'm leaving right away, just the worry of fading away :/
22:06 hagarth partner: do continue to be around here, that way hopefully it does not fade away and we won't have to miss having you around
22:08 partner i will do my best to keep things active here and there, i've expressed my interest in continuing with our storage on the old team and developing it further, i have plans and visions for it
22:08 partner in the best/worst scenario it follows with me to the platform team.. :o
22:09 hagarth partner: nice, feel free to let us know if any assistance is needed from us.
22:09 partner hagarth: sure, thanks
22:11 partner i better get some sleep now, was nice talking to you, hagarth
22:11 hagarth partner: likewise, good night!
22:12 partner turning over to US timezone support, night :) ->
22:13 GiLgAmEzH I found that I have the same problem described here "Files present on the backend but have become invisible from clients" http://www.gluster.org/pipermail/gluster-users/2011-June/007904.html
22:14 GiLgAmEzH i'm running a rebalance now. Should it fix the problem?
22:15 GiLgAmEzH (sadly I'm on debian 4/gluster 3.1.4)
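A plain rebalance mostly migrates data between bricks; when files exist on the backend but are invisible to clients, it is normally the directory layout that is stale. On the 3.1-era CLI the rebalance was split into phases, and the fix-layout phase is the relevant one. A sketch, assuming this install still matches the documentation of that era, with <VOLNAME> as a placeholder:

    # recompute directory layouts without migrating any data
    gluster volume rebalance <VOLNAME> fix-layout start
    # watch progress
    gluster volume rebalance <VOLNAME> status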
22:25 Gill Hey partner I seem to have gotten my issue fixed. Have a question though: I restarted one node and now when i run gluster volume info one of my bricks shows offline, but the peers are connected. any way to fix this?
22:27 Gill The nodes are still getting the data
22:28 Gill So I guess its ok, just not giving the port and it says its offline
22:36 T3 joined #gluster
22:49 harish joined #gluster
22:57 plarsen joined #gluster
22:59 glusterbot News from newglusterbugs: [Bug 1058300] VMs do not resume after paused state and storage connection to a gluster domain (they will also fail to be manually resumed) <https://bugzilla.redhat.com/show_bug.cgi?id=1058300>
22:59 glusterbot News from newglusterbugs: [Bug 1188064] log files get flooded when removexattr() can't find a specified key or value <https://bugzilla.redhat.com/show_bug.cgi?id=1188064>
23:30 glusterbot News from newglusterbugs: [Bug 1188066] logging improvements in marker translator <https://bugzilla.redhat.com/show_bug.cgi?id=1188066>
23:32 DV joined #gluster
