
IRC log for #gluster, 2015-06-29


All times shown according to UTC.

Time Nick Message
00:10 mribeirodantas joined #gluster
00:16 MrAbaddon joined #gluster
00:22 B21956 joined #gluster
00:27 mckaymatt joined #gluster
00:45 gildub joined #gluster
01:00 mckaymatt joined #gluster
01:22 16WABDEE7 joined #gluster
01:35 suliba joined #gluster
01:39 TheCthulhu joined #gluster
01:40 SpComb joined #gluster
01:50 R0ok_ joined #gluster
01:55 nangthang joined #gluster
03:01 bharata-rao joined #gluster
03:16 Pupeno joined #gluster
03:20 haomaiwa_ joined #gluster
03:36 TheSeven joined #gluster
03:38 Manikandan joined #gluster
03:51 nangthang joined #gluster
03:52 gem joined #gluster
03:54 itisravi joined #gluster
03:58 rejy joined #gluster
04:00 rjoseph joined #gluster
04:19 RedW joined #gluster
04:22 kdhananjay joined #gluster
04:43 anil joined #gluster
04:45 sakshi joined #gluster
04:48 ndarshan joined #gluster
04:49 jiffin joined #gluster
04:50 ramteid joined #gluster
04:58 gem joined #gluster
04:59 vimal joined #gluster
05:00 gem joined #gluster
05:06 cuqa joined #gluster
05:06 cuqa joined #gluster
05:09 nbalacha joined #gluster
05:20 gem joined #gluster
05:30 Pupeno joined #gluster
05:33 jblack joined #gluster
05:34 jblack Hello. I have a dumb question..  I have an ebs snapshot that has a gluster volume on it.   I'm trying to attach that snapshot to an instance, but I don't know how to configure the new instance to use the volume.
05:35 ashiq joined #gluster
05:36 jblack (pardon the acronyms; these are AWS constructs).  To properly abstract this: I have a drive attached to a new Linux box as /dev/xvdg, which contains a gluster brick.  I'm not quite sure how to configure the system to gain access to that brick
05:36 Bhaskarakiran joined #gluster
05:37 rafi joined #gluster
05:37 jblack The original system is still up and running.  There are two servers, and the brick is a 1x2 replication
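A minimal sketch of the usual first steps for a situation like jblack's, assuming the brick lives on the attached device /dev/xvdg and using a hypothetical mount point; this only makes the brick data visible again, it does not by itself re-register the brick with an existing volume:
    mkdir -p /export/brick1
    mount /dev/xvdg /export/brick1            # mount the attached EBS device
    ls -la /export/brick1                     # brick data plus the .glusterfs directory
    getfattr -d -m . -e hex /export/brick1    # a brick root normally carries trusted.glusterfs.volume-id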
05:40 AndroUser joined #gluster
05:41 AndroUser left #gluster
05:42 gem joined #gluster
05:43 gem joined #gluster
05:47 gem1 joined #gluster
05:47 sakshi joined #gluster
05:49 hchiramm_home joined #gluster
05:55 anrao joined #gluster
06:00 lanning joined #gluster
06:03 deepakcs joined #gluster
06:07 Vortac joined #gluster
06:08 maveric_amitc_ joined #gluster
06:09 soumya_ joined #gluster
06:11 ekuric joined #gluster
06:14 rgustafs joined #gluster
06:18 mike25de left #gluster
06:18 karnan joined #gluster
06:18 atalur joined #gluster
06:26 vimal joined #gluster
06:26 overclk joined #gluster
06:29 siel joined #gluster
06:34 overclk joined #gluster
06:37 jtux joined #gluster
06:38 vmallika joined #gluster
06:40 R0ok_ joined #gluster
06:47 haomaiwa_ joined #gluster
06:54 NTQ joined #gluster
06:55 nangthang joined #gluster
06:56 gem joined #gluster
06:57 pppp joined #gluster
07:05 al0 joined #gluster
07:09 hchiramm joined #gluster
07:10 deniszh joined #gluster
07:11 Slashman joined #gluster
07:12 gem joined #gluster
07:14 gem joined #gluster
07:18 raghu joined #gluster
07:18 Trefex joined #gluster
07:26 kotreshhr joined #gluster
07:37 spalai joined #gluster
07:41 DV joined #gluster
07:43 fsimonce joined #gluster
07:45 curratore joined #gluster
08:02 ctria joined #gluster
08:05 rgustafs joined #gluster
08:07 Arrfab joined #gluster
08:13 ramkrsna joined #gluster
08:13 ramkrsna joined #gluster
08:14 LebedevRI joined #gluster
08:18 mbukatov joined #gluster
08:22 NTQ joined #gluster
08:22 gem joined #gluster
08:30 nsoffer joined #gluster
08:45 Manikandan joined #gluster
08:46 lyang0 joined #gluster
08:52 kovshenin joined #gluster
08:53 kbyrne joined #gluster
08:54 kovshenin joined #gluster
09:07 MrAbaddon joined #gluster
09:08 gem joined #gluster
09:29 gem joined #gluster
09:30 Manikandan_ joined #gluster
09:33 Skinny_ so
09:33 Skinny_ I just spent 3+ days getting geo-replication working
09:34 Skinny_ only to find out that the regex used to validate host/vol patterns is case sensitive
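For reference, a hedged sketch of the kind of command Skinny_ is describing; the volume and host names are hypothetical, and the practical takeaway of the complaint is that all-lowercase host/volume names are less likely to trip the validation regex:
    gluster volume geo-replication mastervol geoslave.example.com::slavevol create push-pem
    gluster volume geo-replication mastervol geoslave.example.com::slavevol start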
09:49 kovsheni_ joined #gluster
10:01 glusterbot News from newglusterbugs: [Bug 1236512] DHT + rebalance :-  file permission got changed (sticky bit and setgid is set) after file migration failure <https://bugzilla.redhat.com/show_bug.cgi?id=1236512>
10:03 [Enrico] joined #gluster
10:04 Guest63308 joined #gluster
10:04 Guest63308 hi everybody
10:05 Guest63308 I need help, I have a question. Can anybody help me, please?
10:07 Guest63308 Where does glusterd store the information about volumes? I know about /var/lib/glusterd; is there any other place in the filesystem?
10:09 Guest63308 I need to reset all config files, like a fresh install
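A hedged sketch of a full reset on a node where nothing needs to be preserved; besides /var/lib/glusterd, reused brick directories also keep state (the .glusterfs directory and volume-id/gfid xattrs on the brick root), so they need clearing too. Paths are illustrative:
    systemctl stop glusterd                                   # or: service glusterd stop
    rm -rf /var/lib/glusterd/*                                # peers, volume definitions, and the node UUID
    setfattr -x trusted.glusterfs.volume-id /export/brick1    # only if the old brick dir will be reused
    setfattr -x trusted.gfid /export/brick1
    rm -rf /export/brick1/.glusterfs
    systemctl start glusterd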
10:14 Bhaskarakiran joined #gluster
10:16 nsoffer joined #gluster
10:26 jcastill1 joined #gluster
10:31 jcastillo joined #gluster
10:31 glusterbot News from newglusterbugs: [Bug 1112518] [FEAT/RFE] "gluster volume restart" cli option <https://bugzilla.redhat.com/show_bug.cgi?id=1112518>
10:32 spalai left #gluster
10:38 gem_ joined #gluster
10:49 lkoranda_ joined #gluster
10:49 marbu joined #gluster
10:52 ira joined #gluster
10:54 mbukatov joined #gluster
10:57 [7] joined #gluster
10:59 lkoranda joined #gluster
11:04 spalai joined #gluster
11:04 spalai left #gluster
11:10 ira joined #gluster
11:13 curratore joined #gluster
11:17 soumya joined #gluster
11:18 jcastill1 joined #gluster
11:23 jcastillo joined #gluster
11:26 yannick joined #gluster
11:27 yannick I'm trying to start glusterd, but glusterd crashed with the error "0-management: Initialization of volume 'management' failed, review your volfile again". Any hints? I haven't altered the volfile.
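A hedged first step for a crash like yannick's: run glusterd in the foreground with debug logging and read the management log, which usually names the failing translator or option. The log path below is the common default and may differ per distribution:
    glusterd --debug                                           # foreground, debug-level logging
    less /var/log/glusterfs/etc-glusterfs-glusterd.vol.log     # default glusterd log on most installs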
11:28 rafi1 joined #gluster
11:32 glusterbot News from newglusterbugs: [Bug 1223839] /lib64/libglusterfs.so.0(+0x21725)[0x7f248655a725] ))))) 0-rpc_transport: invalid argument: this <https://bugzilla.redhat.com/show_bug.cgi?id=1223839>
11:40 tanuck joined #gluster
11:43 kovshenin joined #gluster
11:44 meghanam joined #gluster
11:47 Guest63308 left #gluster
11:52 rafi joined #gluster
11:54 kotreshhr left #gluster
11:58 anrao joined #gluster
12:03 chirino joined #gluster
12:03 unclemarc joined #gluster
12:08 jtux joined #gluster
12:13 jiffin1 joined #gluster
12:17 rjoseph joined #gluster
12:20 [Enrico] joined #gluster
12:21 bene2 joined #gluster
12:21 firemanxbr joined #gluster
12:27 anrao joined #gluster
12:28 ernetas left #gluster
12:31 jiffin joined #gluster
12:41 itisravi joined #gluster
12:46 anrao joined #gluster
12:50 wkf joined #gluster
12:55 aaronott joined #gluster
12:58 kdhananjay joined #gluster
13:07 wkf joined #gluster
13:10 shaunm_ joined #gluster
13:16 hamiller joined #gluster
13:24 julim joined #gluster
13:31 plarsen joined #gluster
13:40 georgeh-LT2 joined #gluster
13:43 rwheeler joined #gluster
13:55 bennyturns joined #gluster
13:55 rveroy joined #gluster
14:05 jrm16020 joined #gluster
14:15 harish joined #gluster
14:21 plarsen joined #gluster
14:29 wushudoin joined #gluster
14:30 akay1 anyone seen a problem with the trashcan where deleting a file doesn't show any file on the .trashcan mount point (but it does create the appropriate folder) and the file exists on the bricks but with 0 size?
14:31 akay1 and ------T permission on the brick file too
14:31 glusterbot akay1: ----'s karma is now -1
14:31 shyam joined #gluster
14:32 ndevos that poor ----
14:32 glusterbot ndevos: --'s karma is now -1
14:32 akay1 yeah whoooops
14:32 ndevos anoopcs: something you know more about? ^^
14:33 akay1 seeing a lot of "no subvolume for hash" errors in mount log
14:34 akay1 [2015-06-29 10:53:04.280998] W [dht-layout.c:189:dht_layout_search] 0-gv0-dht: no subvolume for hash (value) = 480949771
14:34 akay1 [2015-06-29 10:53:04.281431] W [MSGID: 108008] [afr-read-txn.c:241:afr_read_txn] 0-gv0-replicate-6: Unreadable subvolume -1 found with event generation 2. (Possible split-brain)
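For context on the ------T files akay1 mentioned above: zero-byte, sticky-bit-only files on a brick are normally DHT link files pointing at the subvolume that really holds the data. A hedged way to confirm (brick path and filename are hypothetical):
    getfattr -d -m . -e hex /export/brick1/path/to/file
    # a DHT link file carries a trusted.glusterfs.dht.linkto xattr naming the real subvolume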
14:36 mribeirodantas joined #gluster
14:43 akay1 are most of the active guys here based in the US?
14:46 sysconfig joined #gluster
14:47 maveric_amitc_ joined #gluster
14:48 atalur joined #gluster
14:50 squizzi_ joined #gluster
14:50 jcastill1 joined #gluster
14:55 jcastillo joined #gluster
15:02 XpineX joined #gluster
15:06 cholcombe joined #gluster
15:09 B21956 joined #gluster
15:13 jotun joined #gluster
15:13 jblack joined #gluster
15:18 jmarley joined #gluster
15:25 al joined #gluster
15:25 bennyturns joined #gluster
15:27 NTQ joined #gluster
15:27 jotun joined #gluster
15:29 meghanam joined #gluster
15:31 vmallika joined #gluster
15:35 PeterA joined #gluster
15:35 al joined #gluster
15:36 mckaymatt joined #gluster
15:36 PeterA I upgraded to 3.5.4 and the brick log seems soooooo much cleaner… how should I rotate the log daily?
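One hedged way to get the daily rotation PeterA asks about is a plain logrotate drop-in over the default log locations; copytruncate avoids having to signal the brick processes, and gluster also has an on-demand rotate command:
    # /etc/logrotate.d/glusterfs-bricks (illustrative)
    /var/log/glusterfs/bricks/*.log {
        daily
        rotate 7
        compress
        missingok
        notifempty
        copytruncate
    }
    # alternatively, on demand: gluster volume log rotate VOLNAME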
15:39 soumya joined #gluster
15:50 jiffin joined #gluster
15:50 jblack joined #gluster
15:52 vovcia hi / what is INODELK and why does it take so much time?
15:53 vovcia I have like a bazillion seconds waiting on a distributed replicated cluster for INODELK:      99.32  292479.79 us      15.00 us 1974559.00 us            300     INODELK
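vovcia's numbers look like gluster's own profiling output; a hedged sketch of how that per-brick latency table is produced (volume name is hypothetical). INODELK is the internal inode lock the replication layer takes around its transactions, so very long INODELK waits often point at lock contention or one brick responding slowly:
    gluster volume profile myvol start
    # ... run the workload ...
    gluster volume profile myvol info     # %-latency / avg / min / max per FOP, including INODELK
    gluster volume profile myvol stop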
16:03 jbrooks joined #gluster
16:12 jotun joined #gluster
16:15 rwheeler joined #gluster
16:28 calavera joined #gluster
16:32 atalur joined #gluster
16:34 Sjors Hi all
16:34 rwheeler joined #gluster
16:34 Sjors I'm trying to make a 1-brick Gluster volume into a 2-brick Replicate volume
16:35 [7] Sjors: IIUC that's not supported, had massive issues with that as well
16:35 Sjors meh
16:36 Sjors I just added the brick, but it's a Distribute now, and when I try to remove the brick the daemon crashes
16:36 Sjors super-great
16:36 [7] if you add it as a replica brick, it will just believe that there's no data on the volume anymore... not much better ;)
16:37 Sjors [7]: I did a sync of the complete volume first
16:37 Sjors [7]: so it would be OK hopefully
16:37 [7] oh... that might be part of the problem actually
16:38 Sjors ok nice, the daemon doesn't come up anymore
16:38 cholcombe joined #gluster
16:38 Sjors immediately crashes after a restart :-(
16:39 Sjors I have a stack trace
16:39 Sjors this is Gluster 3.7.1
16:39 Sjors it's in gluster_volume_defrag_restart
16:40 Sjors looks like this bug: https://bugzilla.redhat.com/show_bug.cgi?id=1228093
16:40 glusterbot Bug 1228093: unspecified, unspecified, ---, spalai, POST , Glusterd crash
16:40 Sjors with this fix: http://review.gluster.org/#/c/11090/
16:45 anoopcs akay1, Are you online?
16:46 anoopcs akay1, Was there any rebalance process running in parallel?
16:48 anoopcs ndevos, Thanks for making note of this issue related to trash.
16:48 Sjors doing a build with that commit cherry-picked now
16:48 Sjors hopefully the daemon will come up, I can remove the brick again, and add it with "replicas 2"
16:48 Sjors then it will add it as a replica
16:48 Sjors then I can do a full self-heal and restart the volume
16:48 Sjors hopefully
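The rough shape of the plan Sjors lays out above, with hypothetical volume, host, and brick names; this is a sketch of the steps, not a recommendation, since converting a 1-brick volume to replica 2 is exactly what triggered the trouble here:
    gluster volume remove-brick myvol host2:/export/brick1 force    # drop the brick that was added as distribute
    gluster volume add-brick myvol replica 2 host2:/export/brick1   # re-add it as the second replica
    # then trigger the full self-heal discussed below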
16:51 Sjors there
16:51 Sjors with commit cherry-picked, gluster comes up correctly
16:59 mckaymatt joined #gluster
17:01 curratore hello, is anyone using glusterfs on docker containers?
17:02 cholcombe joined #gluster
17:02 Sjors here we go. brick could be removed, brick could be re-added with replicate 2
17:03 Sjors lots of I/O errors and a self-heal running
17:03 Sjors but hopefully gluster will be able to figure it out by itself from here
17:03 Bhaskarakiran joined #gluster
17:04 curratore Sjors: the problem was the version then?
17:04 Sjors curratore: well, I had a bunch of problems; the one that's fixed now is the daemon crash (fixed by cherry-picking a commit in master that will be in 3.7.2)
17:04 Sjors curratore: the original problem, adding a replicate brick, is being worked on atm
17:04 curratore I see
17:05 Sjors curratore: the brick is added, but I'm getting lots of I/O errors browsing through the Gluster mount point of the volume
17:05 Sjors curratore: it's pretty big, I hope a self-heal will eventually fix it
17:05 Sjors (a few terabytes, so not enormous)
17:05 curratore Sjors: I have a similar problem, but with docker containers added on top :)
17:05 Sjors curratore: what's the problem?
17:06 curratore I am updating to 3.7.2 and redoing the containers, let’s see if I can remove the second brick and fix it
17:06 Sjors oh, 3.7.2 is already released
17:06 Sjors didn't know that
17:06 curratore Sjors: I added a third brick to a replica 2 volume, after removing the old one
17:07 curratore Sjors: after I started to add data, the new data is synced, but no news about the old, so I have a brick with 8TB and the other with 8GB :D
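When a fresh brick joins a replica set, existing data is only copied onto it once self-heal crawls it; a hedged way to kick that off and watch progress (volume name hypothetical):
    gluster volume heal myvol full     # ask the self-heal daemon to crawl and replicate existing files
    gluster volume heal myvol info     # entries still pending heal, per brick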
17:07 Sjors ah yeah
17:08 mckaymatt joined #gluster
17:08 Sjors as far as I could find, Gluster does not automatically migrate anything from the old bricks to the new one
17:08 curratore tried to sync, and says sync success
17:08 Sjors but I couldn't find any useful resources on it
17:08 curratore then how i sync the old one with the new one?
17:09 calavera_ joined #gluster
17:09 curratore I thought that heal will do
17:09 Sjors I just ran an `rsync` before adding the brick
17:09 Sjors so at least the initial data is synchronized
17:10 Sjors and then I hope a self-heal will do the rest
17:10 curratore mmm
17:11 curratore I could lose my 8GB but not 8TB
17:11 curratore :D
17:13 curratore Sjors: with rsync at least you have the same content on both bricks I guess
17:13 Sjors yeah
17:13 Sjors that was my thought
17:14 curratore how is going the heal?
17:15 curratore I thought the mapper was the right tool to do it, working by itself to map all the info on the volumes
17:19 hagarth joined #gluster
17:19 Sjors I seem to be having permissions problems
17:19 Sjors maybe some extended permissions didn't get copied by rsync -a
17:19 Sjors -a does not include -E
17:21 curratore I see
17:21 Sjors I wonder why there are no resources about extending replicate gluster volumes
17:22 Sjors only distribute volumes
17:24 curratore That was the reason why I thought that heal will do it
17:25 curratore you removed the brick from the replica, detached it from the pool, and afterwards added it to the pool and added it to the replica?
17:26 Sjors I had no brick to remove
17:26 Sjors so I just added to pool and added to replica
17:26 Sjors it was a new brick
17:26 Sjors I used to have only one brick, then I had two
17:26 Sjors with data rsynced over it
17:26 curratore ah
17:27 curratore ok thx :)
17:30 Sjors hopefully my rsync -aE will help
17:34 corretico joined #gluster
17:46 firemanxbr joined #gluster
17:52 firemanxbr joined #gluster
18:07 mckaymatt joined #gluster
18:12 anoopcs akay1, Can you please drop a mail to gluster-users(regarding the trash feature issue) in case I miss your reply on channel here?
18:15 calavera joined #gluster
18:18 coredump joined #gluster
18:23 soumya joined #gluster
18:24 Rapture joined #gluster
18:25 ramkrsna joined #gluster
18:39 corretico joined #gluster
18:55 vimal joined #gluster
19:02 MrAbaddon joined #gluster
19:07 nsoffer joined #gluster
19:21 bennyturns joined #gluster
19:30 aaronott joined #gluster
19:45 firemanxbr_ joined #gluster
19:49 calavera joined #gluster
19:53 badone__ joined #gluster
20:10 theron joined #gluster
20:10 DV joined #gluster
20:27 DV_ joined #gluster
20:27 hchiramm_home joined #gluster
20:33 glusterbot News from newglusterbugs: [Bug 1229422] server_lookup_cbk erros on LOOKUP only when quota is enabled <https://bugzilla.redhat.com/show_bug.cgi?id=1229422>
20:46 NTQ joined #gluster
20:49 wushudoin| joined #gluster
20:49 deniszh joined #gluster
20:54 wushudoin| joined #gluster
20:55 nsoffer joined #gluster
21:17 mribeirodantas joined #gluster
21:36 calavera joined #gluster
21:36 B21956 joined #gluster
21:44 wkf joined #gluster
21:45 B21956 joined #gluster
21:56 Pupeno_ joined #gluster
21:58 calavera joined #gluster
22:02 kripto joined #gluster
22:03 kripto Greetings.. I'm seeing some oddness in our Gluster environment. I'm seeing multi gig log files filling with 2 lines ... Can someone point me to the right place to address this issue? The lines are "[2015-06-29 21:57:27.879419] E [rpcsvc.c:195:rpcsvc_program_actor] 0-rpc-service: RPC Program procedure not available for procedure 2 in GF-DUMP
22:03 kripto [2015-06-29 21:57:27.879458] E [rpcsvc.c:450:rpcsvc_check_and_reply_error] 0-rpcsvc: rpc actor failed to complete successfully"
22:12 malevolent joined #gluster
22:12 xavih joined #gluster
22:14 papamoose1 joined #gluster
22:35 jmarley joined #gluster
22:58 B21956 joined #gluster
23:06 _joel joined #gluster
23:29 _joel hello.  we had a slight boo boo and deleted files directly from the brick.  luckily we don't need to recover anything but would like to recover the disk space.  is it as simple as running something like this?  "find .glusterfs/ -type f -links 1 -exec rm {} \;"
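_joel's one-liner targets the hardlinks gluster keeps under .glusterfs: every regular file on a brick normally has a second hardlink there, so a link count of 1 on a gfid file means the user-visible file is gone. A hedged, dry-run-first variant, restricted to the gfid subdirectories and with a hypothetical brick path:
    cd /export/brick1
    find .glusterfs/??/??/ -type f -links 1 -print     # list the orphaned gfid files first
    find .glusterfs/??/??/ -type f -links 1 -delete    # then reclaim the space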
23:43 kripto Greetings.. I'm seeing some oddness in our Gluster environment. I'm seeing multi gig log files filling with 2 lines ... Can someone point me to the right place to address this issue? The lines are "[2015-06-29 21:57:27.879419] E [rpcsvc.c:195:rpcsvc_program_actor] 0-rpc-service: RPC Program procedure not available for procedure 2 in GF-DUMP
23:43 kripto [2015-06-29 21:57:27.879458] E [rpcsvc.c:450:rpcsvc_check_and_reply_error] 0-rpcsvc: rpc actor failed to complete successfully"
23:45 Pupeno joined #gluster
23:49 jblack joined #gluster
23:58 maveric_amitc_ joined #gluster
