IRC log for #gluster, 2016-01-18


All times shown according to UTC.

Time Nick Message
00:01 haomaiwa_ joined #gluster
00:02 julim joined #gluster
00:37 sankarshan joined #gluster
01:04 zhangjn joined #gluster
01:15 plarsen joined #gluster
01:19 EinstCrazy joined #gluster
01:23 nangthang joined #gluster
01:39 Lee1092 joined #gluster
01:41 haomaiwa_ joined #gluster
02:01 haomaiwang joined #gluster
02:33 plarsen joined #gluster
02:45 zhangjn_ joined #gluster
02:56 zhangjn joined #gluster
02:59 zhangjn joined #gluster
03:01 volga629 joined #gluster
03:01 haomaiwa_ joined #gluster
03:03 alghost joined #gluster
03:04 vmallika joined #gluster
03:05 gem joined #gluster
03:22 bharata-rao joined #gluster
03:23 EinstCrazy joined #gluster
03:24 sakshi joined #gluster
03:24 EinstCrazy joined #gluster
03:28 zhangjn joined #gluster
03:29 EinstCra_ joined #gluster
03:30 EinstCrazy joined #gluster
03:32 EinstCrazy joined #gluster
03:35 EinstCrazy joined #gluster
03:41 EinstCrazy joined #gluster
03:43 EinstCrazy joined #gluster
03:44 EinstCrazy joined #gluster
03:45 EinstCra_ joined #gluster
03:46 nbalacha joined #gluster
03:48 shubhendu joined #gluster
04:01 haomaiwa_ joined #gluster
04:02 itisravi joined #gluster
04:07 zhangjn joined #gluster
04:07 hgowtham joined #gluster
04:08 kshlm joined #gluster
04:10 ramteid joined #gluster
04:12 d0nn1e joined #gluster
04:14 nbalacha joined #gluster
04:16 harish joined #gluster
04:28 EinstCrazy joined #gluster
04:28 karthik_ joined #gluster
04:31 kanagaraj joined #gluster
04:41 RameshN__ joined #gluster
04:43 rafi joined #gluster
04:55 raghug joined #gluster
04:56 nehar joined #gluster
05:01 haomaiwang joined #gluster
05:04 kotreshhr joined #gluster
05:10 RameshN__ joined #gluster
05:11 Manikandan joined #gluster
05:14 nehar joined #gluster
05:14 zhangjn joined #gluster
05:18 anil joined #gluster
05:19 pppp joined #gluster
05:20 zhangjn joined #gluster
05:22 skoduri joined #gluster
05:23 vmallika joined #gluster
05:25 ashiq joined #gluster
05:31 overclk joined #gluster
05:34 kotreshhr joined #gluster
05:38 Bhaskarakiran joined #gluster
05:40 ndarshan joined #gluster
05:48 zhangjn joined #gluster
05:57 atalur joined #gluster
05:57 poornimag joined #gluster
06:01 haomaiwa_ joined #gluster
06:06 zhangjn joined #gluster
06:09 nishanth joined #gluster
06:10 mhulsman joined #gluster
06:11 hgowtham joined #gluster
06:12 nangthang joined #gluster
06:13 vimal joined #gluster
06:17 mhulsman joined #gluster
06:27 hos7ein joined #gluster
06:31 nehar joined #gluster
06:32 Saravana_ joined #gluster
06:49 EinstCra_ joined #gluster
06:50 Bhaskarakiran joined #gluster
06:57 gem joined #gluster
06:59 Debloper joined #gluster
07:00 zhangjn_ joined #gluster
07:01 haomaiwa_ joined #gluster
07:04 nangthang joined #gluster
07:04 raghug joined #gluster
07:06 mhulsman joined #gluster
07:09 mhulsman1 joined #gluster
07:10 harish_ joined #gluster
07:15 mbukatov joined #gluster
07:16 Bhaskarakiran joined #gluster
07:20 EinstCrazy joined #gluster
07:24 kshlm joined #gluster
07:28 Bhaskarakiran_ joined #gluster
07:30 jtux joined #gluster
07:33 gem joined #gluster
07:36 kovshenin joined #gluster
07:37 ramky joined #gluster
07:48 mhulsman joined #gluster
07:48 kovshenin joined #gluster
07:52 EinstCrazy joined #gluster
07:54 kdhananjay joined #gluster
07:55 skoduri joined #gluster
08:01 haomaiwa_ joined #gluster
08:04 mobaer joined #gluster
08:06 MACscr is there a way to see heal progress? i don't feel like it's properly healing
08:06 MACscr should heal info give the same output on both nodes in a two-node cluster?
08:06 zhangjn joined #gluster
08:07 MACscr right now all i feel like i'm getting is high cpu usage on node 1, and that's about it
08:13 [Enrico] joined #gluster
08:14 ivan_rossi joined #gluster
08:21 kanagaraj joined #gluster
08:22 niram left #gluster
08:23 MACscr i bet JoeJulian knows =P
08:29 itisravi MACscr: What version of gluster are you running? does the heal info output show "Possibly undergoing heal" ?
08:29 MACscr itisravi: yes they do
08:29 MACscr 3.7.x
08:30 itisravi MACscr: Ok, that means the file is undergoing heal.
08:30 MACscr but no way of knowing the actual progress, or whether something is hanging?
08:31 itisravi As of today there is no way to know the % of progress..
08:31 MACscr itisravi: but why am i getting very little cpu activity on node2, but 700% on node 1?
08:32 itisravi MACscr: likely due to the checksumming done on the data blocks. Is node1 the 'source' ?
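
A sketch of the heal-inspection commands discussed above, assuming gluster 3.7 and a placeholder volume name "myvol":

    # entries still needing heal; in-flight files show "Possibly undergoing heal"
    gluster volume heal myvol info
    # per-brick crawl statistics: heals attempted, completed, failed
    gluster volume heal myvol statistics
    # just the count of pending entries per brick
    gluster volume heal myvol statistics heal-count
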
08:33 MACscr yes
08:34 deniszh joined #gluster
08:34 MACscr itisravi: lastly, it says nfs isn't running on node1. how should i properly start it without screwing anything up?
08:35 itisravi MACscr: If you are using FUSE mounts, it shouldn't matter. After the heal is complete, you could try 'service glusterd restart' or 'gluster vol start volname force'
08:36 gildub joined #gluster
08:36 itisravi That should bring up the gluster nfs server too.
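
A minimal sketch of the restart itisravi describes, to be run once the heal has finished ("myvol" is a placeholder volume name):

    # restart the management daemon (this also respawns the self-heal daemon)
    service glusterd restart
    # or force-start the volume, which brings up any missing brick/NFS processes
    gluster volume start myvol force
    # confirm "NFS Server" and "Self-heal Daemon" show Online: Y
    gluster volume status myvol
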
08:37 MACscr so i can't resolve the nfs issue until they heal?
08:37 spalai joined #gluster
08:38 itisravi MACscr: The selfheal daemon is also restarted if you run those commands.
08:39 MACscr nfs aside, I have a two node cluster setup with iscsi using the image files that are stored on the gluster cluster as LUNs. Should i really be mounting the gluster volumes on each gluster node for iscsi access or should i be accessing /var/gluster-storage directly?
08:39 itisravi Since the heal is going on, you might wait until it's over.
08:39 itisravi not sure about that.
08:40 MACscr If i only have about 72GB of files stored in gluster, why is each gluster host using about 155GB? Are there duplicates stored somewhere, and why?
08:48 R0ok_ joined #gluster
08:54 MACscr itisravi: i'm starting to question my actual gluster version; i'm seeing the following in my glfsheal log: Using Program GlusterFS 3.3
08:55 rafi joined #gluster
08:55 itisravi MACscr: that's the rpc version. gluster --version should give you the version.
08:55 MACscr glusterfs 3.7.6 built on Nov  9 2015 15:17:09
08:56 nehar joined #gluster
08:56 raghug joined #gluster
08:56 MACscr [2016-01-18 08:51:01.486646] W [MSGID: 101012] [common-utils.c:2776:gf_get_reserved_ports] 0-glusterfs: could not open the file /proc/sys/net/ipv4/ip_local_reserved_ports for getting reserved ports info [No such file or directory]
08:56 MACscr [2016-01-18 08:51:01.486679] W [MSGID: 101081] [common-utils.c:2810:gf_process_reserved_ports] 0-glusterfs: Not able to get reserved ports, hence there is a possibility
08:56 MACscr that glusterfs may consume reserved port
08:57 MACscr wonder how worried i should be about those two in my heal log as well
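
Those warnings typically mean the kernel interface for reserved ports is absent (common on older kernels and in some containers), in which case they are mostly harmless. A hedged sketch of checking it:

    # check whether the interface exists at all
    cat /proc/sys/net/ipv4/ip_local_reserved_ports
    # if it exists, ports can be reserved so ephemeral sockets avoid them, e.g.
    sysctl -w net.ipv4.ip_local_reserved_ports=24007-24008
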
08:58 karnan joined #gluster
08:59 malevolent joined #gluster
08:59 Manikandan joined #gluster
09:01 mhulsman joined #gluster
09:01 arcolife joined #gluster
09:01 xavih joined #gluster
09:01 18WABTL2G joined #gluster
09:07 malevolent joined #gluster
09:07 b0p joined #gluster
09:08 rafi1 joined #gluster
09:09 xavih joined #gluster
09:24 glafouille joined #gluster
09:25 abyss^ JoeJulian: I'd like to move from debian 6 to debian 8. Because of debian 6 I'm stuck on gluster 3.3.1. Now I'd like to rsync /var/lib/glusterfs to the new machines, install glusterfs 3.3.1 on debian 8, then detach the disks (I have virtualization) from the old (debian 6) OS and attach them to the new debian 8 one. After that I should have the old glusterfs on the new OS, yes? Is that possible? Should that work?
09:26 abyss^ Then I could easily upgrade to the newest glusterfs...
09:26 Slashman joined #gluster
09:36 Saravana_ joined #gluster
09:37 Sunghost joined #gluster
09:42 rafi joined #gluster
09:43 Bhaskarakiran joined #gluster
09:44 xavih joined #gluster
09:48 Pupeno joined #gluster
09:48 Pupeno joined #gluster
09:48 glafouille joined #gluster
09:56 Sunghost Hello, on one node I have a problem with glusterfs tasks hanging while copying or moving files. I read it could be due to low memory. I have 11GB of RAM and an 18TB disk.
09:56 Sunghost error: Call Trace:
09:56 Sunghost [ 1800.603286]  [<ffffffff81512f15>] ? rwsem_down_read_failed+0xf5/0x130
09:56 Sunghost [ 1800.603294]  [<ffffffff812b7ce4>] ? call_rwsem_down_read_failed+0x14/0x30
09:57 Sunghost i read on the net that it could happen when the system runs out of memory. the problem occurs if i copy a 40GB file onto a distribute volume (3.7.6) over nfs
09:57 Sunghost any idea?
10:01 ashiq joined #gluster
10:01 haomaiwa_ joined #gluster
10:02 Saravana_ joined #gluster
10:02 spalai1 joined #gluster
10:04 rafi joined #gluster
10:09 glafouille joined #gluster
10:10 zhangjn joined #gluster
10:12 Sunghost is there a formula for the memory needed for a given volume size? any volume settings for low-memory systems?
10:14 rafi joined #gluster
10:17 shubhendu joined #gluster
10:26 hos7ein_ joined #gluster
10:33 nisroc joined #gluster
10:36 Guest64461 joined #gluster
10:37 haomaiwa_ joined #gluster
10:41 hgowtham joined #gluster
10:52 rafi joined #gluster
10:52 Akee joined #gluster
10:57 glafouille joined #gluster
11:08 JesperA joined #gluster
11:09 spalai joined #gluster
11:13 pasqd joined #gluster
11:20 poornimag joined #gluster
11:28 karnan joined #gluster
11:33 karnan joined #gluster
11:37 karnan joined #gluster
11:43 rafi1 joined #gluster
11:46 rafi joined #gluster
11:49 Bhaskarakiran joined #gluster
11:50 rafi1 joined #gluster
11:58 bluenemo joined #gluster
11:59 shubhendu joined #gluster
12:03 rafi joined #gluster
12:14 haomaiw__ joined #gluster
12:16 haomaiwang joined #gluster
12:21 spalai left #gluster
12:37 nbalacha joined #gluster
12:39 atinm joined #gluster
12:52 mobaer joined #gluster
12:55 plarsen joined #gluster
13:03 zhangjn joined #gluster
13:03 hos7ein_ joined #gluster
13:09 ahino joined #gluster
13:12 deniszh joined #gluster
13:14 karthik_ left #gluster
13:27 b0p joined #gluster
13:32 zhangjn joined #gluster
13:36 mobaer joined #gluster
13:40 anil joined #gluster
13:46 haomaiwa_ joined #gluster
13:49 unclemarc joined #gluster
13:53 zhangjn_ joined #gluster
13:55 Manikandan joined #gluster
14:02 zhangjn joined #gluster
14:04 plarsen joined #gluster
14:05 chirino_m joined #gluster
14:12 Manikandan joined #gluster
14:13 ahino joined #gluster
14:14 atinm joined #gluster
14:16 nbalacha joined #gluster
14:23 karnan joined #gluster
14:26 julim joined #gluster
14:28 spalai joined #gluster
14:28 spalai left #gluster
14:30 TheBallPI can I migrate from a replica 2 to a replica 3?
14:31 TheBallPI oh, too easy: gluster volume add-brick volname replica 3 new-brick
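
Spelling out TheBallPI's one-liner as a sketch, with placeholder names ("myvol", "server3", and the brick path are illustrative):

    # add a third brick and raise the replica count in one step
    gluster volume add-brick myvol replica 3 server3:/bricks/brick1
    # then populate the new brick
    gluster volume heal myvol full
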
14:33 nehar joined #gluster
14:37 kdhananjay joined #gluster
14:38 rwheeler joined #gluster
14:42 dgandhi joined #gluster
14:43 dgandhi joined #gluster
14:44 dgandhi joined #gluster
14:44 hamiller joined #gluster
14:45 shyam joined #gluster
14:46 B21956 joined #gluster
14:52 chirino joined #gluster
14:58 cholcombe joined #gluster
15:02 anti[Enrico] joined #gluster
15:08 harish_ joined #gluster
15:13 inodb joined #gluster
15:16 vimal joined #gluster
15:28 squizzi_ joined #gluster
15:29 jdang joined #gluster
15:30 tswartz joined #gluster
15:30 jdang joined #gluster
15:31 chirino joined #gluster
15:31 kotreshhr left #gluster
15:33 farhorizon joined #gluster
15:40 monotek1 joined #gluster
15:41 Apeksha joined #gluster
15:42 neofob joined #gluster
15:54 Saravana_ joined #gluster
15:55 chirino joined #gluster
15:56 bowhunte_ joined #gluster
15:57 mobaer joined #gluster
16:01 RameshN joined #gluster
16:04 bennyturns joined #gluster
16:05 haomaiwang joined #gluster
16:10 bowhunte_ joined #gluster
16:13 JesperA joined #gluster
16:19 18VAAO2LE joined #gluster
16:29 shaunm joined #gluster
16:30 RameshN_ joined #gluster
16:32 mobaer joined #gluster
16:51 haomaiwa_ joined #gluster
16:53 frozengeek joined #gluster
16:57 julim joined #gluster
17:01 F2Knight joined #gluster
17:22 vmallika joined #gluster
17:25 Rapture joined #gluster
17:45 mobaer joined #gluster
17:52 lpabon joined #gluster
18:00 ivan_rossi left #gluster
18:02 neofob joined #gluster
18:07 monotek joined #gluster
18:20 virusuy hi guys, i've updated my gluster from 3.6 to 3.7 and when i run quota enable i get the following message: Volume quota failed. The cluster is operating at version 30600. Quota command enable is unavailable in this version.
18:20 semiautomatic joined #gluster
18:22 JoeJulian @op-version
18:22 glusterbot JoeJulian: The operating version represents the RPC and translator capabilities required to accommodate the volume settings ( http://gluster.org/community/documentation/index.php/OperatingVersions ). To allow older version clients to connect to newer servers, reset any volume options that require the newer op-version.
18:23 JoeJulian Huh... I thought there was more information in that factoid.
18:23 tswartz on that note, i'm interested in downgrading from 3.7 to whatever stable is. am i going to have any issues?
18:23 tswartz well, foreseeable issues anyway
18:24 JoeJulian gluster volume set all cluster.op-version 30704
18:24 JoeJulian tswartz: 3.7 is stable.
18:25 virusuy should i run that command JoeJulian ?
18:25 JoeJulian virusuy: If your clients have all been upgraded, yes.
18:25 tswartz JoeJulian, thanks
18:26 virusuy JoeJulian: by clients do you mean nodes?
18:26 JoeJulian @glossary
18:26 glusterbot JoeJulian: A "server" hosts "bricks" (ie. server1:/foo) which belong to a "volume"  which is accessed from a "client"  . The "master" geosynchronizes a "volume" to a "slave" (ie. remote1:/data/foo).
18:27 virusuy JoeJulian: Ok
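
The op-version bump JoeJulian gives above can be sanity-checked first; a sketch, assuming the default glusterd state-directory layout:

    # the cluster's current op-version is recorded in glusterd's state file
    grep operating-version /var/lib/glusterd/glusterd.info
    # once every server and client runs 3.7.x, raise it
    gluster volume set all cluster.op-version 30704
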
18:27 semiautomatic joined #gluster
18:27 JoeJulian tswartz: With modern development processes that include copious CI testing, there aren't really "stable" and "development" branches anymore.
18:49 DV joined #gluster
18:51 DV joined #gluster
18:52 virusuy JoeJulian: that worked, thanks !
18:56 monotek1 joined #gluster
18:59 d0nn1e joined #gluster
19:03 haomaiwa_ joined #gluster
19:04 mobaer joined #gluster
19:28 chirino joined #gluster
19:30 Guest_ joined #gluster
19:34 mhulsman joined #gluster
19:45 MACscr if i have a two node gluster cluster and i want to use iscsi from it, do i need to mount the files as a client on the gluster nodes themselves or can i serve directly from /var/gluster-storage?
19:46 JoeJulian You cannot serve directly from a brick, if that's what you're asking.
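
The usual pattern instead is to mount the volume through the client stack, even on a server node; a sketch with placeholder names ("myvol" and the mountpoint):

    # FUSE-mount the volume locally, even on a gluster server
    mount -t glusterfs localhost:/myvol /mnt/myvol
    # point the iSCSI target at image files under /mnt/myvol, never at the brick itself
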
19:46 MACscr and if my gluster cluster is only serving about 72GB of files, why does there seem to be about 155GB worth of data on that system?
19:46 MACscr JoeJulian: figured that was the case, just wanted to make sure
19:46 JoeJulian By what measure?
19:47 MACscr JoeJulian: i'm using two linux containers that are only used for gluster/iscsi, and i decided to move one to a new array and it was 155GB of files
19:48 JoeJulian I suspect when you moved it you didn't maintain the hardlinks.
19:49 MACscr grr, now that i think of it, the mv failed (i used the mv command and lost ssh access, stupid me), so i had to restore through r1soft, which does block-level backups, and that's how much data it said it restored
19:50 dlambrig left #gluster
19:52 deniszh joined #gluster
19:56 JoeJulian So the files under .glusterfs should be hardlinks. I think if it were me, I would ensure the volume was healthy, then delete the .glusterfs tree on the one that's showing 155GB and do a full heal.
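
Two sketches of what JoeJulian describes, with a placeholder volume name and destination path; the rm is destructive, so treat this as an illustration only, on a volume already confirmed healthy:

    # when copying a brick, -H preserves hardlinks and avoids doubling the data
    rsync -aH /var/gluster-storage/ /new/array/gluster-storage/
    # or drop the .glusterfs tree on the inflated brick and rebuild it via heal
    rm -rf /var/gluster-storage/.glusterfs
    gluster volume heal myvol full
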
19:58 abyss^ JoeJulian: sorry, I ask the same question - maybe you haven't seen...
19:58 abyss^ JoeJulian: I'd like to move from debian 6 to debian 8. Because of debian 6 I'm stuck on gluster 3.3.1. Now I'd like to rsync /var/lib/glusterfs to the new machines, install glusterfs 3.3.1 on debian 8, then detach the disks (I have virtualization) from the old (debian 6) OS and attach them to the new debian 8 one. After that I should have the old glusterfs on the new OS, yes? Is that possible? Should that work?
19:59 abyss^ Then I could easily upgrade to the newest glusterfs...
19:59 JoeJulian Seems reasonable to me.
19:59 abyss^ :)
20:00 abyss^ of course I will check and test this but, you know... maybe you know something that I should know :) Thank you :)
20:01 JoeJulian I don't know debian all that well.
20:02 abyss^ yeah, but you can change debian to any other distro (I think;)) :)
20:02 abyss^ the main idea is that if I detach the disks and attach them to a completely different VM, it will still work :)
20:03 abyss^ (and move only the configs from the old OS)
20:03 JoeJulian Yep
20:03 abyss^ thank you
20:04 JoeJulian The only thing I would do differently, is that since this involves down time, I would probably upgrade the servers simultaneously.
20:05 abyss^ unfortunately there's no gluster package newer than 3.3.1 for debian 6 :(
20:05 JoeJulian As an aside... I'm surprised the official cause of death hasn't yet been reported for Ian Murdock.
20:06 lpabon joined #gluster
20:07 JoeJulian Yeah, I would move the bricks and state directories, install the version you want to use on the deb8 servers, and "glusterd --xlator-option *.upgrade=on -N" on the new servers. Then just start the new gluster servers on the new version.
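
The sequence JoeJulian outlines, as a hedged sketch (it assumes the glusterd state directory is /var/lib/glusterd, the default on most installs):

    # on each new debian 8 server, after attaching the brick disks and
    # copying over the old state directory:
    glusterd --xlator-option *.upgrade=on -N    # regenerate volfiles for the new version
    service glusterd start
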
20:08 abyss^ yeah, strange (about Ian Murdock)
20:09 abyss^ oh, ok, I didn't know that was possible - I expected the new gluster to do something with the old configs or so... That's why I thought moving to the same version and then upgrading was the better option.
20:10 abyss^ So I will try it with the new version directly, without the intermediate upgrade
20:11 deniszh joined #gluster
20:12 shubhendu joined #gluster
20:16 JoeJulian Well... it would still be upgrading. That's what that glusterd command does: it upgrades the state files.
20:17 abyss^ yeah, right I read my sentence again and now I see it didn't make sense;)
20:17 abyss^ Thank you.
20:31 gildub joined #gluster
20:43 chirino joined #gluster
21:01 PaulCuzner joined #gluster
21:04 haomaiwa_ joined #gluster
21:05 Guest__ joined #gluster
21:33 mobaer joined #gluster
21:34 misc joined #gluster
21:35 b0p joined #gluster
21:49 F2Knight joined #gluster
22:20 haomaiwa_ joined #gluster
22:21 F2Knight joined #gluster
22:28 misc joined #gluster
22:33 rcampbel3 joined #gluster
22:33 rcampbel3 Hi, I'm having a split-brain issue - anyone available to help?
22:34 MACscr rcampbel3: have you read the split-brain troubleshooting page?
22:35 rcampbel3 which one is the best one to start with?
22:35 rcampbel3 I'm reading a few that showed up in google.
22:41 coredump joined #gluster
22:46 inodb joined #gluster
22:54 rcampbel3 ok here's my split brain info - http://ur1.ca/ofetp
22:59 gildub joined #gluster
23:04 rcampbel3 I really could use some help with split brain - http://ur1.ca/ofetp
23:04 glusterbot Title: #312158 Fedora Project Pastebin (at ur1.ca)
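
For split-brain on 3.7, the CLI offers policy-based resolution; a sketch with placeholder names ("myvol", "server1", the brick and file paths):

    # list the files currently in split-brain
    gluster volume heal myvol info split-brain
    # resolve one file by declaring a brick's copy the source
    gluster volume heal myvol split-brain source-brick server1:/bricks/brick1 /path/in/volume
    # or keep whichever copy is larger
    gluster volume heal myvol split-brain bigger-file /path/in/volume
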
23:16 Guest__ joined #gluster
23:18 tertiary joined #gluster
23:20 tertiary how would i go about diagnosing an issue with `volume create` that doesn't provide a meaningful exception? Meaning, `gluster volume create ...` returns `volume create: myvol: failed`
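
The CLI's terse failure usually has a detailed cause in glusterd's log on the node the command ran on; a sketch, assuming default log locations:

    # the real error message generally lands here
    tail -n 50 /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
    # common culprits: a brick on the root filesystem (append "force"), leftover
    # gluster xattrs on a reused brick path, or a peer that isn't connected
    gluster peer status
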
23:22 haomaiwa_ joined #gluster
23:39 F2Knight joined #gluster
23:45 plarsen joined #gluster
23:52 davidbitton joined #gluster
23:57 gildub joined #gluster
23:58 inodb joined #gluster
