
IRC log for #gluster, 2016-01-21


All times shown according to UTC.

Time Nick Message
00:01 haomaiwang joined #gluster
00:07 JoeJulian Sorry, had to step out of my office for a bit.
00:08 JoeJulian cpetersen: Yes, gluster with nfs can be HA. Use corosync + pacemaker with ganesha-nfs.
00:09 JoeJulian And I have no idea why anyone but a salesman for linbit would recommend drbd.
00:09 cpetersen Thanks for the answers.  I will use them wisely.  :)
00:11 JoeJulian Good luck.
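A rough sketch of the gluster side of the HA NFS setup JoeJulian describes (NFS-Ganesha fronted by pacemaker/corosync), based on the GlusterFS 3.7-era tooling; the volume name is a placeholder and the exact steps should be checked against the docs for your version:

    # prerequisite: /etc/ganesha/ganesha-ha.conf on every node, naming the HA
    # cluster, its member nodes and a floating VIP per node (see the docs)
    gluster nfs-ganesha enable                    # builds the pacemaker/corosync HA cluster
    gluster volume set myvol ganesha.enable on    # exports "myvol" through NFS-Ganesha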
00:15 tree333 joined #gluster
00:59 gildub joined #gluster
01:01 haomaiwa_ joined #gluster
01:16 zhangjn joined #gluster
01:20 calavera joined #gluster
01:21 mrrrgn joined #gluster
01:29 Lee1092 joined #gluster
01:51 zhangjn joined #gluster
01:51 aravindavk joined #gluster
01:57 calavera joined #gluster
01:58 haomaiwa_ joined #gluster
02:00 badone joined #gluster
02:01 7GHAB2XQU joined #gluster
02:05 ahino joined #gluster
02:06 hagarth joined #gluster
02:16 zhangjn joined #gluster
02:40 badone joined #gluster
02:46 auzty joined #gluster
02:48 ilbot3 joined #gluster
02:48 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:49 7GHAB2X3O joined #gluster
03:01 gem joined #gluster
03:01 mobaer joined #gluster
03:03 EinstCrazy joined #gluster
03:04 bharata-rao joined #gluster
03:06 harish joined #gluster
03:11 natarej joined #gluster
03:23 hagarth joined #gluster
03:23 nishanth joined #gluster
03:34 zhangjn joined #gluster
03:36 calavera joined #gluster
03:43 mobaer joined #gluster
03:53 atinm joined #gluster
03:53 shubhendu joined #gluster
03:57 sakshi joined #gluster
03:59 haomaiwang joined #gluster
03:59 MACscr|lappy joined #gluster
04:00 nbalacha joined #gluster
04:00 ahino joined #gluster
04:01 haomaiwang joined #gluster
04:01 nickage joined #gluster
04:02 nickage hello, does anyone know why would one brick stay offline?
04:06 plarsen joined #gluster
04:06 sakshi nickage, can you check to see if there is something in the brick logs
04:06 alghost Could you give me the log?
04:07 sakshi nickage, to force start all bricks try: gluster volume start <volname> force
04:08 nickage tried force, didn't help
04:09 sakshi nickage, are there other bricks on that node, is the node up and working?
04:10 nickage node is up, it has other volumes and bricks, they are working
04:10 JoeJulian The usual reason a brick would stay offline is because you didn't have it mounted.
04:11 alghost nickage: If you can, give me the glusterd log file
04:11 alghost oh..
04:11 nickage joined #gluster
04:11 nehar joined #gluster
04:12 alghost If you have a github account, you can use gist.github.com to upload logs
04:12 alghost nickage:
04:12 nickage I should have one
04:14 nickage looks like the port on this brick is not the same as on others in the same volume, is that a problem?
04:15 RameshN joined #gluster
04:15 sakshi nickage, nope, the tcp port will be different for each brick even though they are in the same volume
04:15 alghost I think that is not a problem
04:20 nickage do i need to restart glusterfsd or just glusterd to restart all bricks?
04:24 sakshi I think in this case it would be enough to stop and start the volume, that would restart all the bricks
04:25 sakshi nickage, if the brick still does not start check for error in the brick logs, which can be found at /var/log/glusterfs/bricks/<brick_path>
04:27 nickage sakshi, I checked, nothing there; it shows the regular "Final graph" like everything is ok. Can I delete one brick and just recreate it, syncing it from a working one?
04:27 calavera joined #gluster
04:30 sakshi nickage, just so we can get to a final solution, could you paste the logs, maybe to pastebin, and share the link; maybe we can find something there
04:30 sakshi nickage, also, has this brick been offline since the volume was created, or did it go down sometime later?
04:31 nickage sakshi, it didn't come back to life after an os restart
04:33 sakshi nickage, hmm, not very sure what the problem could be here
04:33 sakshi atinm, have you come across such an issue discussed here?
04:34 atinm sakshi, could you summarize the issue once as I was not watching the channel?
04:34 sakshi atinm, one of the bricks remains offline; volume start force or an os restart is unable to bring it back online.
04:34 atinm sakshi, what does the brick log indicate?
04:35 sakshi atinm, there are other volumes hosted on that node which are working fine
04:35 sakshi atinm, nickage is yet to send the log, but mentions that it shows regular 'final graph'
04:36 atinm nickage, even ps output doesn't show the process for that brick?
04:37 nickage atinm, right, there is a process pid shown in gluster volume status volume_name, but there is no such process in ps
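A condensed version of the diagnosis steps suggested above; the volume name and brick path are placeholders, not taken from the log:

    gluster volume status myvol                      # which bricks report a port / show as online
    gluster volume start myvol force                 # try to respawn any missing brick process
    ps aux | grep glusterfsd                         # is the brick daemon actually running?
    less /var/log/glusterfs/bricks/data-brick1.log   # brick log: look past "Final graph" for a crash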
04:47 pdrakeweb joined #gluster
04:48 arcolife joined #gluster
04:56 gem joined #gluster
04:57 karthikfff joined #gluster
04:58 pppp joined #gluster
04:58 nbalacha joined #gluster
04:59 ashiq joined #gluster
05:01 haomaiwa_ joined #gluster
05:03 jiffin joined #gluster
05:04 ramky_ joined #gluster
05:04 ppai joined #gluster
05:05 zhangjn joined #gluster
05:05 cpetersen joined #gluster
05:18 anil joined #gluster
05:18 JoeJulian paste the brick log somewhere so we can double check it
05:19 skoduri joined #gluster
05:19 atinm nickage,
05:19 atinm oops
05:19 atinm nickage, problem identified!
05:19 atinm nickage, brick process is crashing
05:19 atinm nickage, seems like a crash from marker translator
05:19 atinm nickage, there should be core files as well
05:19 JoeJulian Ah, so he did post a log somewhere?
05:20 atinm JoeJulian, yes, he shared with me
05:20 JoeJulian Then I'm going to bed. Goodnight all.
05:20 atinm JoeJulian, good night and thanks a lot for all your help in this channel, much appreciated!
05:21 atinm nickage, Could you file a bug against the marker translator in version 3.5.2, with the sosreport and core file attached to the bug?
05:21 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
05:23 poornimag joined #gluster
05:24 Apeksha joined #gluster
05:27 JoeJulian Is it bug 1215550?
05:27 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1215550 high, unspecified, ---, vmallika, CLOSED CURRENTRELEASE, glusterfsd crashed after directory was removed from the mount point, while self-heal and rebalance were running on the volume
05:28 Bhaskarakiran joined #gluster
05:29 atinm JoeJulian, doesn't look like it, as the backtraces are different
05:30 atinm JoeJulian, http://pastebin.com/nn630VpT
05:30 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
05:31 JoeJulian There are several marker bugs fixed in release-3.5 though. It might be worth upgrading.
05:31 Manikandan joined #gluster
05:31 JoeJulian In fact, there's one specifically about fill_from_name
05:32 vimal joined #gluster
05:32 ovaistar_ joined #gluster
05:34 nickage atinm, where are the core files located?
05:34 atinm nickage, it depends on where the core pattern is set
05:35 atinm nickage, cat /proc/sys/kernel/core_pattern
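To locate the core file, as discussed; the example paths are placeholders and the pattern varies per distro:

    cat /proc/sys/kernel/core_pattern
    # a bare "core" means the file lands in the crashing process's working directory (often /);
    # a leading "|" means a collector such as abrt or systemd-coredump has picked it up
    ls -l /core* /var/crash 2>/dev/null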
05:36 Manikandan joined #gluster
05:38 hgowtham joined #gluster
05:47 kdhananjay joined #gluster
05:52 hagarth joined #gluster
05:58 ndarshan joined #gluster
05:59 armyriad joined #gluster
06:01 haomaiwang joined #gluster
06:10 vimal joined #gluster
06:11 karnan joined #gluster
06:11 skoduri joined #gluster
06:18 MACscr joined #gluster
06:18 MACscr joined #gluster
06:23 zhangjn joined #gluster
06:28 rafi joined #gluster
06:31 rafi joined #gluster
06:36 atalur joined #gluster
06:43 nishanth joined #gluster
06:45 anil joined #gluster
06:46 nangthang joined #gluster
06:58 zhangjn joined #gluster
06:59 zhangjn joined #gluster
06:59 pranithk joined #gluster
07:00 hos7ein joined #gluster
07:01 haomaiwa_ joined #gluster
07:03 karnan joined #gluster
07:05 pppp joined #gluster
07:05 anrao joined #gluster
07:10 gem joined #gluster
07:14 Saravanakmr joined #gluster
07:19 shubhendu joined #gluster
07:19 kshlm joined #gluster
07:27 bhuddah joined #gluster
07:34 nishanth joined #gluster
07:37 davidhadas joined #gluster
07:37 [Enrico] joined #gluster
07:43 mhulsman joined #gluster
07:45 anrao joined #gluster
07:51 nbalacha joined #gluster
08:01 pppp joined #gluster
08:01 haomaiwa_ joined #gluster
08:02 nickage joined #gluster
08:07 karnan joined #gluster
08:09 zhangjn_ joined #gluster
08:11 zhangjn joined #gluster
08:15 shubhendu joined #gluster
08:17 nishanth joined #gluster
08:31 rafi1 joined #gluster
08:38 frozengeek joined #gluster
08:42 ctria joined #gluster
08:45 inodb joined #gluster
08:53 sakshi joined #gluster
08:57 ahino joined #gluster
09:01 haomaiwa_ joined #gluster
09:03 b0p joined #gluster
09:06 rafi joined #gluster
09:06 muneerse joined #gluster
09:08 anrao joined #gluster
09:11 muneerse joined #gluster
09:21 klaxa joined #gluster
09:21 rafi1 joined #gluster
09:34 harish_ joined #gluster
09:36 nickage joined #gluster
09:37 haomaiwang joined #gluster
09:42 twaddle joined #gluster
09:44 nbalacha joined #gluster
09:52 zhangjn joined #gluster
09:55 kbyrne joined #gluster
10:01 haomaiwa_ joined #gluster
10:05 zhangjn joined #gluster
10:19 gildub joined #gluster
10:24 mobaer joined #gluster
10:24 aravindavk joined #gluster
10:32 Slashman joined #gluster
10:38 mhulsman joined #gluster
10:46 inodb joined #gluster
10:48 mhulsman joined #gluster
10:56 nishanth joined #gluster
10:58 ackjewt joined #gluster
11:00 karnan joined #gluster
11:01 haomaiwang joined #gluster
11:04 Raide joined #gluster
11:07 gowtham joined #gluster
11:14 mhulsman joined #gluster
11:17 ashiq_ joined #gluster
11:17 muneerse joined #gluster
11:23 rafi joined #gluster
11:25 b0p1 joined #gluster
11:27 nbalacha joined #gluster
11:30 karnan joined #gluster
11:32 Raide Hi! I'm running gluster 3.7.6 on Centos 7.2 with ctdb and samba 4.2.4. I use samba vfs to access the gluster volume. The problem is that I got a lot of core dumps of smbd and this error in the logs: "INTERNAL ERROR: Signal 6 in pid xxxx" After that smbd panics and restarts. Anyone seen this before?
11:33 Norky joined #gluster
11:33 rafi joined #gluster
11:34 nishanth joined #gluster
11:37 anoopcs Raide, In which logs? ctdb or smbd?
11:38 Raide anoopcs, it's in the message log. nothing in the ctdb or smbd logs
11:39 anoopcs Raide, Ok. Anything from glusterfs logs?
11:42 Raide anoopcs no, nothing unusual that I can see
11:42 anoopcs Raide, Do you have a backtrace produced by the smbpanic action script?
11:46 Raide anoopcs I haven't installed the debuginfo package, but I can do that.
11:46 Bhaskarakiran joined #gluster
12:01 haomaiwa_ joined #gluster
12:13 Bhaskarakiran joined #gluster
12:15 kotreshhr joined #gluster
12:16 Raide anoopcs Here is a snippet from the message log: http://pastebin.com/cnzXhGsQ
12:16 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
12:25 anrao joined #gluster
12:26 Raide @paste
12:26 glusterbot Raide: For a simple way to paste output, install netcat (if it's not already) and pipe your output like: | nc termbin.com 9999
12:28 kkeithley1 joined #gluster
12:31 ira joined #gluster
12:35 kotreshhr left #gluster
12:49 shubhendu joined #gluster
12:51 chirino_m joined #gluster
12:56 unclemarc joined #gluster
13:01 haomaiwa_ joined #gluster
13:01 nbalacha joined #gluster
13:09 kshlm joined #gluster
13:10 Raide anoopcs I don't have the /usr/share/samba/panic-action so there is no backtrace generated
13:12 anoopcs Raide, Ok. I could see log_buf_destroy in dmesg backtrace.
13:13 anoopcs Raide, Are you sure that there are no errors from glusterfs logs?
13:14 Raide anoopcs hmmm...i will have a look again
13:19 poornimag joined #gluster
13:23 Raide anoopcs I see some lock-related errors, but they do not correlate in time with the smbd panics and they are not very frequent. http://ur1.ca/ofpd8
13:24 Raide anoopcs I will check the samba vfs logs for the clients connection to the server
13:25 B21956 joined #gluster
13:25 Raide anoopcs that log snippet was from the etc-glusterfs-glusterd.vol.log
13:34 anoopcs Raide, First look inside vfs glusterfs logs
13:35 Raide anoopcs found this in one of the vfs logs:  0-ch-online-client-1: failed to get the port number for remote subvolume. Please run 'gluster volume status' on server to see if brick process is running.
13:36 anoopcs Raide, What does gluster volume status say?
13:36 anoopcs are the bricks up and running or not?
13:37 Raide anoopcs http://ur1.ca/ofpeq
13:37 glusterbot Title: #313201 Fedora Project Pastebin (at ur1.ca)
13:38 twaddle Why is it possible to see the files in the brick path, but not write to them? Servers have to mount back onto themselves?
13:39 anoopcs Raide, volume status looks good.
13:39 Norky it's a very bad idea to write directly to bricks. Yes, if you want to change the data in the volume from the servers, you must mount the volume on the servers
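In other words, never write into a brick directory directly; even on the servers, mount the volume and write through it. A minimal sketch with placeholder server and volume names:

    mkdir -p /mnt/myvol
    mount -t glusterfs server1:/myvol /mnt/myvol
    echo test > /mnt/myvol/file.txt   # goes through glusterfs, so replication and metadata stay consistent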
13:39 anoopcs Raide, Are you able to use a native glusterfs mount?
13:40 Raide anoopcs I was thinking of that as well. That will be my next step I think.
13:40 anoopcs Raide, Cool.
13:41 anoopcs Raide, And you can also look into the brick logs for errors.
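For reference, the vfs_glusterfs log location and verbosity that anoopcs points Raide at are configured per share in smb.conf; a hedged example (share, volume and log path are placeholders, option names per the vfs_glusterfs man page):

    [gvol]
        path = /
        read only = no
        vfs objects = glusterfs
        glusterfs:volume = myvol
        glusterfs:logfile = /var/log/samba/glusterfs-myvol.log
        glusterfs:loglevel = 7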
13:42 julim joined #gluster
13:43 mobaer joined #gluster
13:58 Bhaskarakiran joined #gluster
13:58 Pupeno joined #gluster
14:01 haomaiwa_ joined #gluster
14:02 rafi joined #gluster
14:20 unclemarc joined #gluster
14:21 rwheeler joined #gluster
14:25 skoduri joined #gluster
14:27 rafi joined #gluster
14:31 shyam joined #gluster
14:36 ira joined #gluster
14:38 rwheeler joined #gluster
14:43 plarsen joined #gluster
14:43 hamiller joined #gluster
14:46 rafi joined #gluster
14:50 rwheeler joined #gluster
14:52 skylar joined #gluster
14:52 pranithk joined #gluster
14:53 plarsen joined #gluster
14:57 ekuric joined #gluster
15:01 haomaiwa_ joined #gluster
15:02 julim joined #gluster
15:03 dlambrig joined #gluster
15:07 kdhananjay joined #gluster
15:12 coredump joined #gluster
15:16 shyam joined #gluster
15:24 skoduri joined #gluster
15:33 cpetersen joined #gluster
15:42 bowhunter joined #gluster
15:45 EinstCrazy joined #gluster
15:48 farhorizon joined #gluster
16:00 baoboa joined #gluster
16:01 jdang joined #gluster
16:01 haomaiwang joined #gluster
16:06 calavera joined #gluster
16:10 hgowtham joined #gluster
16:13 cpetersen joined #gluster
16:14 jiffin joined #gluster
16:22 RameshN joined #gluster
16:26 overclk joined #gluster
16:33 shaunm joined #gluster
16:36 vimal joined #gluster
16:39 Pupeno joined #gluster
16:39 Pupeno joined #gluster
16:55 muneerse2 joined #gluster
17:01 haomaiwang joined #gluster
17:05 bfm joined #gluster
17:05 bfm hi guys! is the 'gluster vol rename' command gone from the cli? I'm using 3.7.5 and it's not there
17:07 bfm how would I rename a volume in 3.7.5?
17:08 JoeJulian bfm: The help text was added to the source code but it was never actually implemented.
17:08 bfm hmm… let me check the source…
17:08 JoeJulian The function(s) necessary were never written.
17:09 JoeJulian Someone should probably file a bug report because this question is coming up more and more often. About 4 times over the last three months in here.
17:09 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
17:10 bfm is there a FAQ somewhere that I missed when trying to Google it?
17:10 JoeJulian Nope
17:10 ndevos just checked, but there is no bug for it yet - http://bugs.cloud.gluster.org/
17:10 glusterbot Title: Gluster Bugs (at bugs.cloud.gluster.org)
17:11 bfm does that mean that there is no way to rename a volume except for delete/create?
17:11 JoeJulian In fact, the only place it was ever documented was in a man page produced years ago. It was quickly replaced but some man page aggregator has kept it around.
17:12 bfm yes, I saw it in Ubuntu man on their website
17:12 JoeJulian To answer your question, though, about how to do it: yes. Delete and recreate is an option or, if you're comfortable doing it, manipulating the tree under /var/lib/glusterd/vols directly.
17:12 shubhendu joined #gluster
17:12 hgowtham joined #gluster
17:12 Philambdo1 joined #gluster
17:14 JoeJulian Nice page, ndevos.
17:14 ndevos it's very helpful to check for existing bugs :)
17:14 JoeJulian Much faster than searching bz...
17:19 ndevos the page only has static content, and it is generated each (my) night
17:19 ndevos contributions are welcome! see https://github.com/gluster/gluster-bugs-webui
17:19 glusterbot Title: gluster/gluster-bugs-webui · GitHub (at github.com)
17:20 ovaistariq joined #gluster
17:21 bfm JoeJulian: so, I just stop the volume, rename the volume subdirectory under /var/lib/glusterd/vols, change every mention of the old volume name in all the .vol and brick files there, and after that start the volume back up with the new name?
17:21 JoeJulian That's what I would do.
17:21 bfm probably need to stop glusterd as well
17:21 ndevos bfm: and on all storage servers :)
17:21 gem joined #gluster
17:22 bfm ndevos: yes, sure same on every node
17:23 bfm I was under the impression it might have been hidden somewhere deeper :-) I once tried to change brick ports in a similar way, but the change was always rewritten back by glusterd
17:23 dlambrig joined #gluster
17:24 bfm I'll give it a go :-) thanks, guys!
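Roughly what that manual rename looks like, as an unsupported sketch only: "oldvol" and "newvol" are placeholders, back up /var/lib/glusterd first, and repeat the steps on every storage server while glusterd is stopped:

    gluster volume stop oldvol
    systemctl stop glusterd                       # or: service glusterd stop
    cd /var/lib/glusterd/vols && mv oldvol newvol
    # rewrite the old name inside the volfiles, info and brick files
    grep -rl oldvol newvol | xargs sed -i 's/oldvol/newvol/g'
    # rename generated files that embed the volume name (bash string substitution)
    for f in newvol/*oldvol*; do mv "$f" "${f//oldvol/newvol}"; done
    systemctl start glusterd
    gluster volume start newvol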
17:39 shyam joined #gluster
17:57 rafi joined #gluster
18:01 haomaiwa_ joined #gluster
18:02 nishanth joined #gluster
18:09 Rapture joined #gluster
18:19 mobaer joined #gluster
18:21 spardhas left #gluster
18:21 F2Knight joined #gluster
18:28 F2Knight_ joined #gluster
18:29 caveat- joined #gluster
18:39 Pupeno joined #gluster
18:45 dataio joined #gluster
18:46 twi7ch joined #gluster
18:49 twi7ch This is probably an odd question since Gluster is a distributed FS but is it possible to create a volume without any replication?
18:49 samppah_ yes it is
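A plain distributed volume (no replication at all) is what you get when you don't specify a replica count; a minimal example with placeholder hosts and brick paths:

    gluster volume create distvol server1:/data/brick1 server2:/data/brick1
    gluster volume start distvol
    # files are spread across the bricks by DHT, so losing a brick loses the files stored on it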
18:56 cliluw joined #gluster
18:57 virusuy joined #gluster
18:58 ashiq_ joined #gluster
19:01 haomaiwa_ joined #gluster
19:06 steved_ joined #gluster
19:07 steved_ Posted to the mailing list, but hoping I can get a quicker response here. I have quotas enabled on a distributed volume, and the quota list usage doesn't match a du
19:07 steved_ using 3.6.6
19:12 ovaistariq joined #gluster
19:16 ahino joined #gluster
19:22 dlambrig joined #gluster
19:23 karnan joined #gluster
19:26 ira joined #gluster
19:32 frozengeek joined #gluster
19:38 JoeJulian steved_: Best I can suggest is to search through the channel logs for quota. It's happened before but, since I have no use for quota, I don't remember what the outcome was.
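For anyone hitting the same mismatch, the usual pair of numbers to compare; the volume name and directory are placeholders, and du should be run through a client mount rather than on a brick:

    gluster volume quota myvol list /projects
    du -sh /mnt/myvol/projects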
19:42 dblack joined #gluster
19:45 illogik joined #gluster
19:48 MACscr|lappy joined #gluster
19:50 rwheeler joined #gluster
19:52 papamoose1 joined #gluster
20:01 haomaiwa_ joined #gluster
20:04 hagarth joined #gluster
20:08 chirino joined #gluster
20:11 gildub joined #gluster
20:13 ovaistariq joined #gluster
20:18 F2Knight joined #gluster
20:25 bowhunter joined #gluster
20:43 dgbaley joined #gluster
20:52 theron joined #gluster
20:55 hagarth joined #gluster
21:01 haomaiwa_ joined #gluster
21:10 mobaer joined #gluster
21:14 ovaistariq joined #gluster
21:16 calavera joined #gluster
21:16 ctria joined #gluster
21:30 ahino joined #gluster
21:30 farhorizon joined #gluster
21:33 volga629 joined #gluster
21:34 volga629 Hello Everyone, having an issue when creating a virtual disk with qemu
21:34 volga629 Formatting 'gluster://ns520086.ip-158-69-116.net/datapoint02/canldbn01.qcow2', fmt=qcow2 size=21474836480 encryption=off cluster_size=65536 lazy_refcounts=off refcount_bits=16
21:34 volga629 [2016-01-21 21:31:27.312280] E [MSGID: 108006] [afr-common.c:3880:afr_notify] 0-datapoint02-replicate-0: All subvolumes are down. Going offline until atleast one of them comes back up.
21:35 JoeJulian volga629: ,,(paste) the rest of the log to someplace so I can see it. That error is simply the final result.
21:35 glusterbot volga629: For a simple way to paste output, install netcat (if it's not already) and pipe your output like: | nc termbin.com 9999
21:37 volga629 http://fpaste.org/313444/12216145/
21:37 glusterbot Title: #313444 Fedora Project Pastebin (at fpaste.org)
21:38 JoeJulian Oh, I didn't know it dumped logs to stdout. I wonder if there's more detail in /var/log/glusterfs somewhere.
21:39 volga629 let me see
21:41 hagarth volga629: that message can be ignored. It happens upon termination of a glusterfs object used by qemu-img for virtual disk creation.
21:42 volga629 last time one of my admins added a vm and it brought down the volume
21:43 hagarth volga629: do you have more details on how the volume was brought down?
21:43 volga629 tried to create a virtual disk and got exactly the same message
21:44 volga629 the problem is that there is no information in the log
21:45 hagarth volga629: if gluster volume status reflects that the servers/bricks are running, you can ignore this message as qemu-img is a client for gluster.
21:50 JoeJulian unless, of course, it doesn't work in which case I'd start checking firewalls and hostname resolution.
21:51 hagarth volga629: +1 to what JoeJulian mentions
21:52 volga629 I am allowing based on ip in the firewall
21:53 volga629 for hostname resolution each node uses the same dns server, to avoid issues
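For context, the failing command uses qemu's native gluster block driver; a sketch with placeholder host and image names, plus the server-side check suggested above:

    qemu-img create -f qcow2 gluster://server1/datapoint02/test.qcow2 20G
    gluster volume status datapoint02    # bricks should show Online "Y" before suspecting the client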
22:01 haomaiwa_ joined #gluster
22:05 frozengeek joined #gluster
22:14 misc joined #gluster
22:31 volga629 I created another disk and I saw this in the log http://fpaste.org/313470/14534154/
22:31 glusterbot Title: #313470 Fedora Project Pastebin (at fpaste.org)
22:36 cyberbootje joined #gluster
22:48 mobaer joined #gluster
22:53 farhorizon joined #gluster
23:01 papamoose joined #gluster
23:01 haomaiwa_ joined #gluster
23:30 ctria joined #gluster
23:32 dlambrig joined #gluster
23:47 farhorizon joined #gluster
