
IRC log for #gluster, 2015-03-25


All times shown according to UTC.

Time Nick Message
00:03 bennyturns joined #gluster
00:09 Rapture joined #gluster
00:10 Lee-- joined #gluster
00:55 corretico joined #gluster
01:01 siel joined #gluster
01:05 bala joined #gluster
01:14 suliba joined #gluster
01:24 Pupeno_ joined #gluster
01:25 Gue______ joined #gluster
02:08 haomaiwa_ joined #gluster
02:13 dgandhi joined #gluster
02:14 ira joined #gluster
02:15 DV joined #gluster
02:22 harish_ joined #gluster
02:25 ira joined #gluster
02:27 nangthang joined #gluster
02:38 plarsen joined #gluster
02:55 ppai joined #gluster
03:05 bharata-rao joined #gluster
03:10 dusmant joined #gluster
03:29 bala joined #gluster
03:38 hchiramm__ joined #gluster
03:39 rafi joined #gluster
03:44 shubhendu joined #gluster
03:52 ilbot3 joined #gluster
03:52 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
03:52 lalatenduM joined #gluster
03:56 atinmu joined #gluster
03:57 sripathi joined #gluster
04:08 kdhananjay joined #gluster
04:10 o5k__ joined #gluster
04:11 kanagaraj joined #gluster
04:18 dusmant joined #gluster
04:24 spandit joined #gluster
04:29 kripper1 joined #gluster
04:29 kripper1 Hi
04:29 glusterbot kripper1: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
04:29 anoopcs joined #gluster
04:29 kripper1 I have an architecture question
04:29 jiffin joined #gluster
04:30 kripper1 I saw that gluster's performance is limited by the bandwidth to the peers
04:31 kripper1 in theory, if a client is accessing a volume that has a replica brick on the same machine
04:31 kripper1 it could write at the speed of the local disk
04:32 kripper1 and send delayed data packets to other peers
04:32 kripper1 but this doesn't seem to be the case
04:32 kripper1 as the write speed seems to be limited by the bandwidth to the other peers
04:34 kripper1 can this kind of local-disk write speed + delayed replication to other peers be configured?
04:35 Pupeno joined #gluster
04:35 kripper1 is there a design restriction or some reason why all peers must be in sync?
04:37 kripper1 is geo-replication the answer?
04:37 kripper1 (a readonly replication is ok)
04:37 kripper1 it would also be cool if a normal replica volume could be reconfigured to georeplication
04:38 kripper1 on the go
04:40 RameshN joined #gluster
04:41 ppai joined #gluster
04:43 nbalacha joined #gluster
04:45 anoopcs joined #gluster
04:51 bala joined #gluster
04:52 DV joined #gluster
04:54 soumya joined #gluster
04:58 kripper1 left #gluster
05:00 aravindavk joined #gluster
05:05 kumar joined #gluster
05:07 schandra joined #gluster
05:11 smohan joined #gluster
05:18 karnan joined #gluster
05:18 vimal joined #gluster
05:19 anil_ joined #gluster
05:20 nishanth joined #gluster
05:20 ndarshan joined #gluster
05:25 deepakcs joined #gluster
05:26 T3 joined #gluster
05:26 hgowtham joined #gluster
05:28 aravindavk joined #gluster
05:29 rjoseph joined #gluster
05:31 anoopcs joined #gluster
05:32 kshlm joined #gluster
05:35 kovshenin joined #gluster
05:37 karnan_ joined #gluster
05:38 suliba joined #gluster
05:39 aravindavk joined #gluster
05:40 gem joined #gluster
05:41 aravindavk joined #gluster
05:45 XpineX joined #gluster
05:46 kovshenin joined #gluster
05:47 ashiq joined #gluster
05:48 hgowtham joined #gluster
05:48 92AAAWFTA joined #gluster
05:50 hagarth joined #gluster
05:52 dusmant joined #gluster
06:04 o5k__ joined #gluster
06:06 atalur joined #gluster
06:11 hgowtham joined #gluster
06:15 overclk joined #gluster
06:16 maveric_amitc_ joined #gluster
06:16 RameshN joined #gluster
06:21 nshaikh joined #gluster
06:25 soumya joined #gluster
06:26 meghanam joined #gluster
06:26 soumya joined #gluster
06:39 Guest29776 joined #gluster
06:43 karnan joined #gluster
06:44 karnan_ joined #gluster
06:56 coredump joined #gluster
06:56 bala joined #gluster
06:57 dusmant joined #gluster
07:00 prg3 joined #gluster
07:01 DV joined #gluster
07:02 Arrfab joined #gluster
07:11 raghu joined #gluster
07:13 jtux joined #gluster
07:14 Philambdo joined #gluster
07:15 anrao joined #gluster
07:16 T3 joined #gluster
07:20 Philambdo1 joined #gluster
07:25 pjschmitt joined #gluster
07:28 glusterbot News from newglusterbugs: [Bug 1205540] Data Classification:3.7.0:data loss:detach-tier not flushing data to cold-tier <https://bugzilla.redhat.com/show_bug.cgi?id=1205540>
07:30 dusmant joined #gluster
07:30 badone joined #gluster
07:43 Philambdo joined #gluster
07:45 baoboa joined #gluster
07:49 bala joined #gluster
07:49 ppai joined #gluster
07:50 anrao joined #gluster
07:54 LebedevRI joined #gluster
07:58 glusterbot News from newglusterbugs: [Bug 1205545] Effect of Trash translator over CTR translator <https://bugzilla.redhat.com/show_bug.cgi?id=1205545>
08:01 hgowtham_ joined #gluster
08:11 corretico joined #gluster
08:15 o5k joined #gluster
08:16 RameshN joined #gluster
08:22 [Enrico] joined #gluster
08:30 glusterbot News from resolvedglusterbugs: [Bug 1204008] Sqlite3 library required even after disable-tiering is given during configure <https://bugzilla.redhat.com/show_bug.cgi?id=1204008>
08:36 Pupeno joined #gluster
08:36 Pupeno joined #gluster
08:39 fsimonce joined #gluster
08:42 RameshN joined #gluster
08:47 ktosiek joined #gluster
08:52 dusmant joined #gluster
09:05 Pupeno joined #gluster
09:05 Pupeno joined #gluster
09:05 T3 joined #gluster
09:06 Norky joined #gluster
09:11 deniszh joined #gluster
09:11 liquidat joined #gluster
09:12 yossarianuk joined #gluster
09:17 yossarianuk hi - we have a gluster share as a simple offsite backup - it is replicating over data centres - we want it to be an async share so that someone uploading to a directory doesn't have to wait for the file to be replicated
09:17 yossarianuk am I right in thinking i need to enable geo-replication
09:20 smohan_ joined #gluster
09:22 ndevos yossarianuk: that is correct, geo-replication is the way to go for that
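    (For reference, a minimal sketch of what that geo-replication setup looks like on the master cluster, assuming GlusterFS 3.5+ CLI syntax; "mastervol", "slavehost" and "slavevol" are placeholder names, and passwordless root SSH from one master node to the slave node is assumed to be in place already:

        # distribute the common pem keys used by the geo-rep session
        gluster system:: execute gsec_create
        # create, start and check the session from the master volume to the slave volume
        gluster volume geo-replication mastervol slavehost::slavevol create push-pem
        gluster volume geo-replication mastervol slavehost::slavevol start
        gluster volume geo-replication mastervol slavehost::slavevol status

    The slave volume must already exist and be started on the remote cluster; changes are then shipped asynchronously, so writers on the master side do not wait for the remote copy.)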
09:22 deniszh joined #gluster
09:25 Slashman joined #gluster
09:30 hgowtham_ joined #gluster
09:31 dusmant joined #gluster
09:34 ira joined #gluster
09:34 anrao joined #gluster
09:34 ppai joined #gluster
09:35 kaushal_ joined #gluster
09:36 deniszh1 joined #gluster
09:37 ira joined #gluster
09:38 ira joined #gluster
09:40 yossarianuk ndevos: cheers
09:43 [Enrico] joined #gluster
09:45 anil_ joined #gluster
09:46 kanagaraj joined #gluster
09:50 atinmu joined #gluster
09:51 rjoseph joined #gluster
09:54 navid__ joined #gluster
09:55 navid__ joined #gluster
09:55 hgowtham joined #gluster
09:55 nshaikh joined #gluster
10:04 corretico joined #gluster
10:07 tessier joined #gluster
10:09 kdhananjay joined #gluster
10:09 anoopcs joined #gluster
10:12 anrao joined #gluster
10:12 badone joined #gluster
10:13 [Enrico] joined #gluster
10:17 vijaykumar joined #gluster
10:24 deniszh joined #gluster
10:39 schandra joined #gluster
10:42 dusmant joined #gluster
10:44 harish_ joined #gluster
10:52 kshlm punit_, Awesome news on your problem resolution
10:53 Pupeno joined #gluster
10:53 Pupeno joined #gluster
10:54 T3 joined #gluster
10:55 ppai joined #gluster
10:56 bala joined #gluster
10:57 bene2 joined #gluster
10:58 kaushal_ joined #gluster
10:58 hgowtham joined #gluster
10:58 glusterbot News from newglusterbugs: [Bug 1205624] Data Classification:rebalance fails on a tiered volume <https://bugzilla.redhat.com/show_bug.cgi?id=1205624>
10:59 jmarley joined #gluster
10:59 kovshenin joined #gluster
11:03 anrao joined #gluster
11:07 kanagaraj joined #gluster
11:09 atinmu joined #gluster
11:19 nishanth joined #gluster
11:20 dusmant joined #gluster
11:22 rjoseph joined #gluster
11:26 corretico joined #gluster
11:26 deniszh1 joined #gluster
11:30 kdhananjay joined #gluster
11:31 smohan joined #gluster
11:32 saurabh joined #gluster
11:34 soumya_ joined #gluster
11:35 lalatenduM joined #gluster
11:40 hagarth joined #gluster
11:45 ppai joined #gluster
11:48 meghanam joined #gluster
11:55 calum_ joined #gluster
11:59 Debloper joined #gluster
12:01 ndevos REMINDER: Gluster Community meeting starts now in #gluster-meeting
12:05 jdarcy joined #gluster
12:05 [Enrico] joined #gluster
12:06 shaunm joined #gluster
12:09 gem joined #gluster
12:11 nbalacha joined #gluster
12:12 meghanam joined #gluster
12:14 itisravi joined #gluster
12:14 nottc joined #gluster
12:16 Guest11316 joined #gluster
12:18 rjoseph joined #gluster
12:22 B21956 joined #gluster
12:26 [Enrico] joined #gluster
12:27 lalatenduM joined #gluster
12:28 Gill joined #gluster
12:30 Philambdo joined #gluster
12:43 T3 joined #gluster
12:44 [Enrico] joined #gluster
12:51 wkf joined #gluster
12:53 dgandhi joined #gluster
12:55 rafi joined #gluster
12:58 dusmant joined #gluster
12:58 o5k joined #gluster
13:02 julim joined #gluster
13:02 Slashman_ joined #gluster
13:05 Pupeno joined #gluster
13:12 rjoseph joined #gluster
13:13 maveric_amitc_ joined #gluster
13:17 Pupeno joined #gluster
13:17 Pupeno joined #gluster
13:26 o5k_ joined #gluster
13:29 glusterbot News from newglusterbugs: [Bug 1192378] Disperse volume: client crashed while running renames with epoll enabled <https://bugzilla.redhat.com/show_bug.cgi?id=1192378>
13:31 glusterbot News from resolvedglusterbugs: [Bug 884455] storage domain becomes inactive when rebalance is issued and when subvols-per-directory is set to 1 <https://bugzilla.redhat.com/show_bug.cgi?id=884455>
13:31 jmarley joined #gluster
13:33 hamiller joined #gluster
13:33 T3 joined #gluster
13:34 georgeh-LT2 joined #gluster
13:42 diegows joined #gluster
13:45 theron joined #gluster
13:50 corretico joined #gluster
13:58 bene2 joined #gluster
13:59 glusterbot News from newglusterbugs: [Bug 1205709] ls command blocked when one brick disconnect, even reconnect. <https://bugzilla.redhat.com/show_bug.cgi?id=1205709>
13:59 glusterbot News from newglusterbugs: [Bug 1205715] log files get flooded when removexattr() can't find a specified key or value <https://bugzilla.redhat.com/show_bug.cgi?id=1205715>
14:02 atinmu joined #gluster
14:05 Pupeno joined #gluster
14:05 Pupeno joined #gluster
14:09 DV joined #gluster
14:10 lpabon joined #gluster
14:12 T3 hey guys, I have a Replicated Volume gluster set up, and some doubts
14:12 T3 Replicated Volume: https://github.com/gluster/glusterfs/blob/master/doc/admin-guide/en-US/markdown/admin_setting_volumes.md#creating-replicated-volumes
14:13 T3 This is a 2-server setup, with a direct network connection (minimal latency)
14:14 T3 yet, I see poor read performance on gluster
14:15 T3 Replicated Volumes seems to be synchronous. What would be my options for an asynchronous setup? Only geo-replication? (Even though my setup is not really geo distributed?)
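    (The doc linked above creates a two-way replicated volume roughly like this; server names and brick paths are placeholders:

        # one brick per server, replica count matching the number of bricks listed
        gluster volume create repvol replica 2 transport tcp server1:/export/brick1 server2:/export/brick1
        gluster volume start repvol
        # clients mount it from any server with the native FUSE client
        mount -t glusterfs server1:/repvol /mnt/repvol

    With plain AFR replication like this, every write is acknowledged by all replicas, which is where the synchronous behaviour comes from.)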
14:15 wushudoin joined #gluster
14:20 harish_ joined #gluster
14:20 deniszh joined #gluster
14:22 Norky joined #gluster
14:27 roost joined #gluster
14:27 T3 Any chance I can have an asynchronous Replicated Volume?
14:30 DV joined #gluster
14:30 plarsen joined #gluster
14:30 kshlm joined #gluster
14:32 ecchcw joined #gluster
14:32 ecchcw Hello gluster folks
14:32 ecchcw I've got a question about performance
14:34 ecchcw I'm running 4 c4.large EC2 instances (EBS optimised) with 100 GB / 3000 provisioned IOPS EBS volumes
14:34 hamiller T3, On a replicated volume the read FOP does not need an ack from both nodes, so you should get the data from the fastest node. WRITES do need an ACK, so they are limited by the slowest node
14:35 ecchcw set up a a 2 stripe 2 replicate gluster volume
14:35 ecchcw *as a
14:35 ecchcw I'm not seeing anything better than 60 mb/s read/write from clients
14:37 atinmu joined #gluster
14:40 ecchcw Sorry, let me correct that
14:41 ecchcw 60mb/sec writes, 128 mb/sec reads
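    (A 4-brick striped-replicated volume like the one described is created along these lines; hostnames and paths are placeholders, and note that the stripe translator is generally discouraged, per the joejulian.name article linked further down:

        # bricks are grouped into replica sets (here pairs), and the stripe spans the sets
        gluster volume create stripevol stripe 2 replica 2 transport tcp \
            server1:/export/brick1 server2:/export/brick1 \
            server3:/export/brick1 server4:/export/brick1
        gluster volume start stripevol
    )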
14:48 dbruhn joined #gluster
14:52 firemanxbr_ joined #gluster
14:56 atinmu joined #gluster
14:56 shubhendu joined #gluster
15:04 ctria joined #gluster
15:30 Debloper joined #gluster
15:38 o5k__ joined #gluster
15:45 bennyturns joined #gluster
15:50 firemanxbr joined #gluster
15:54 bala joined #gluster
15:54 deniszh1 joined #gluster
15:57 ndevos joined #gluster
15:57 ndevos joined #gluster
16:02 glusterbot News from resolvedglusterbugs: [Bug 895528] 3.4 Alpha Tracker <https://bugzilla.redhat.com/show_bug.cgi?id=895528>
16:02 glusterbot News from resolvedglusterbugs: [Bug 884597] dht linkfile are created with different owner:group than that source(data) file in few cases <https://bugzilla.redhat.com/show_bug.cgi?id=884597>
16:17 soumya_ joined #gluster
16:32 vimal joined #gluster
16:34 JoeJulian ecchcw: That's gigabit speeds. I assume if you want more you would have to talk to amazon about that.
16:57 T3 joined #gluster
17:05 ecchcw thanks JoeJulian
17:06 ecchcw I've been doing some more testing, added 4 more bricks to the cluster - got double the write and half the read speeds
17:06 ecchcw I'm trying it out with s replicate 2 stripe 4 now
17:06 ecchcw *with a
17:06 JoeJulian @stripe
17:06 glusterbot JoeJulian: Please see http://joejulian.name/blog/should-i-use-stripe-on-glusterfs/ about stripe volumes.
17:10 anoopcs joined #gluster
17:14 jiffin joined #gluster
17:16 pookey joined #gluster
17:18 aea joined #gluster
17:18 ecchcw JoeJulian: I've read that and stripe seems to be compatible with my expected workload. I'm planning to run fatcache on top of it
17:19 ecchcw great article btw
17:24 Rapture joined #gluster
17:28 aea left #gluster
17:28 aea joined #gluster
17:35 kovshenin joined #gluster
17:55 rafi joined #gluster
18:02 daMaestro joined #gluster
18:07 the-me joined #gluster
18:19 anrao joined #gluster
18:27 roost joined #gluster
18:27 daMaestro joined #gluster
18:32 jobewan joined #gluster
18:39 lalatenduM joined #gluster
18:44 deZillium joined #gluster
18:55 aea joined #gluster
18:56 ernetas joined #gluster
18:56 ernetas Hey guys.
18:57 ernetas Why am I getting "failed to fetch volume file" when trying to mount the volume after upgrading from 3.5 to 3.6?
18:57 JoeJulian fpaste a client log
19:00 deniszh joined #gluster
19:01 ernetas JoeJulian: http://fpaste.org/202879/73100631/
19:01 ernetas Sorry
19:01 ernetas gluster volume info and status shows all nodes online and operational
19:02 ernetas http://fpaste.org/202882/42731014/ - everything in a single paste
19:03 deniszh joined #gluster
19:04 JoeJulian "Volume Name: files" "mount -t glusterfs files1:/www /www"
19:04 rwheeler joined #gluster
19:04 JoeJulian wrong volume name in your mount command
19:08 ernetas JoeJulian: same error.
19:08 ernetas (except for "key:files" now)
19:15 JoeJulian ernetas: sorry, doing 4 things at once... 5 if you count this. Let's see glusterd.vol.log from files1
19:16 Rydekull joined #gluster
19:18 ernetas JoeJulian: http://fpaste.org/202888/14273110/
19:18 ernetas added glusterd.vol, too... is it okay?
19:18 deniszh joined #gluster
19:21 JoeJulian did you change it?
19:22 ernetas Neup, but I think apt-get did and I didn't have a backup, so I was wondering if it had to include anything about my volumes
19:23 JoeJulian Oh, no. it's a very generic config for starting glusterd only.
19:23 JoeJulian All the state data is in /var/lib/glusterd
19:23 ernetas Okay.
19:23 JoeJulian hrm...
19:24 JoeJulian Could the hostname be resolving incorrectly on the client? The only guess I have is that somehow the client is talking to a server that doesn't have that volume defined.
19:26 ernetas Hm, neup, ping shows the correct IP that gluster is listening on. Also, nothing from networking has changed.
19:32 deniszh joined #gluster
19:34 JoeJulian can you mount the volume on one of the servers from localhost?
19:36 ernetas (just FYI, that is on the same machine, if that matters) But anyways, using "mount -t glusterfs 127.0.0.1:files /www" shows the same in logs
19:37 gnudna joined #gluster
19:37 gnudna joined #gluster
19:38 gnudna ?
19:38 gnudna left #gluster
19:38 gnudna joined #gluster
19:40 JoeJulian If it were me, at this point, I would pkill -f gluster then start glusterd again. I know the log says it's running the correct version but something's just weird.
19:40 ernetas I tried rebooting the whole system, even shutting down all systems at once and bringing them all at once. Should I try killing, still?
19:40 JoeJulian no.
19:42 JoeJulian kill glusterd. in one window run "glusterd --debug" and try your mount in another and see if there's any clue in the debug output.
19:44 JoeJulian and if my luck for the day holds out, it'll work just fine and we'll still be no closer to figuring it out.
19:44 gnudna at least you're honest ;)
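    (Spelled out, the debug run being suggested looks something like this; the exact service commands depend on the distro:

        # stop the running management daemon
        pkill glusterd            # or: service glusterd stop
        # run it in the foreground with debug logging
        glusterd --debug
        # then, in a second shell, retry the mount and watch the debug output
        mount -t glusterfs files1:/files /www
    )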
19:47 ernetas http://fpaste.org/202903/14273128/ - here's what glusterd throws at the moment of mounting
19:51 ernetas And I compared /etc/glusterfs/glusterd.vol to other servers (I have 2 isolated gluster clusters, one is now 3.6, other is 3.5), it's exactly the same.
19:56 vipulnayyar joined #gluster
20:01 JoeJulian ernetas: I wonder... try "gluster system getspec files" and see if it works.
20:02 ernetas Outputs nothing, exit status 240.
20:10 lpabon joined #gluster
20:10 shaunm joined #gluster
20:10 gnudna JoeJulian was that supposed to output anything?
20:10 JoeJulian yes, it should have put the volfile.
20:10 gnudna i too have the same exit code on ver 3.6.2
20:10 JoeJulian unless they removed it in 3.6...
20:11 JoeJulian works in my 3.6.
20:11 JoeJulian of course "files" is the name of the volume.
20:12 gnudna yeah
20:12 gnudna yeah gluster system getspec kvm
20:12 aravindavk joined #gluster
20:12 gnudna figured it out just now
20:18 tanuck joined #gluster
20:21 deniszh joined #gluster
20:23 JoeJulian aha!
20:24 JoeJulian ernetas: somewhere during the upgrade process I suspect your .vol files were deleted (or renamed). Recreate them by stopping glusterd then running: glusterd --xlator-option "*.upgrade=on" -N
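    (As a sketch, the full recovery sequence on an affected server would be something like the following; the service commands depend on the init system:

        service glusterd stop
        # regenerate the volume .vol files from the state kept in /var/lib/glusterd, then exit
        glusterd --xlator-option "*.upgrade=on" -N
        service glusterd start
    )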
20:30 tessier joined #gluster
20:31 zerick joined #gluster
20:31 deniszh joined #gluster
20:39 aea joined #gluster
20:42 anrao joined #gluster
20:48 victori joined #gluster
21:01 gnudna left #gluster
21:03 ernetas JoeJulian: thanks, that worked. I found the same command somewhere in the mailing lists, but I guess there was a typo in them or I forgot to shut it down first, as it threw up with not knowing "xlator-option" :) .
21:07 JoeJulian That was a fun puzzle. :D
21:07 JoeJulian Glad I could help.
21:20 badone joined #gluster
21:25 wkf joined #gluster
21:25 quique JoeJulian: i have a volume vol1 with bricks in a replica 3 on gluster1, gluster2, gluster3; stuff is messed up. if i have a new volume volnew on a new cluster and i mount it on gluster1, gluster2, and gluster3 and rsync their bricks into volnew, avoiding rsyncing the .glusterfs dir, volnew should be good to go, right?
21:25 JoeJulian If I were to do that, I would only do the first brick and let self-heal do the other two.
21:26 JoeJulian Otherwise you're going to have potential issues with conflicting metadata
21:27 quique so copying to the volnew wont write proper metadata?
21:27 quique as it gets put on each brick?
21:28 JoeJulian Oh, if you're copying to the *volume* then you're good. I read that wrong.
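    (In other words, something along these lines on one of the old servers; hostnames and brick paths are placeholders:

        # mount the new volume and copy through the mount point, not brick-to-brick,
        # skipping gluster's internal .glusterfs directory on the old brick
        mount -t glusterfs newcluster1:/volnew /mnt/volnew
        rsync -avP --exclude='.glusterfs' /bricks/vol1/brick1/ /mnt/volnew/

    Writing through the client mount lets gluster assign fresh metadata and handle replication itself, which avoids the conflicting-metadata issue mentioned above.)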
21:38 o5k joined #gluster
21:42 epaphus joined #gluster
21:42 epaphus Hello
21:42 glusterbot epaphus: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
21:46 wkf joined #gluster
21:51 captainflannel joined #gluster
21:52 captainflannel I'm having some odd permission issue with a gluster volume and samba glustervfs
21:52 captainflannel brand new volume but when shared through samba no one has permissions
21:53 captainflannel I can mount the volume via a client and then share it to samba via the mount point, and that works, but I'm trying to get the vfs to work
21:57 captainflannel samba is bound to our ad btw..
22:08 captainflannel got it working, adjusting my smb.conf to follow symlinks resolved it..
22:12 JoeJulian captainflannel: Cool, glad you got it figured out.
22:14 JoeJulian @learn samba vfs as If none of your users have permissions, try "follow symlinks = yes" in your smb.conf
22:14 glusterbot JoeJulian: The operation succeeded.
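    (A rough smb.conf sketch for the vfs_glusterfs setup being discussed; the share and volume names are placeholders, and option names should be checked against the vfs_glusterfs docs for the Samba version in use:

        [gvol]
            path = /
            vfs objects = glusterfs
            glusterfs:volume = myvol
            kernel share modes = no
            follow symlinks = yes
            read only = no
    )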
22:16 captainflannel yup it was the symlinks issue
22:17 JoeJulian Yeah, took your new knowledge and added it to glusterbot's factoids. :D
22:17 JoeJulian That way I don't have to remember stuff I don't use.
22:17 captainflannel heh
22:17 captainflannel i guess the vfs acts as a symlink?
22:18 JoeJulian I've never noticed. The samba server I used to run I no longer have access to or I'd look at how I configured it.
22:18 ThatGraemeGuy joined #gluster
22:21 captainflannel well thanks i'm up and running now :)
22:21 JoeJulian I kind-of doubt I would have enabled follow symlinks though... weird.
22:22 captainflannel i copied a smb.conf from somewhere and it was specifically denying follow symlinks
22:24 RoyK joined #gluster
22:29 _Bryan_ joined #gluster
22:32 dbruhn joined #gluster
22:47 rotbeard joined #gluster
22:51 T3 joined #gluster
22:54 jbrooks joined #gluster
23:41 T3 joined #gluster
23:49 theron joined #gluster
23:53 theron joined #gluster
23:56 chuz04arley joined #gluster
23:58 chuz04arley joined #gluster
