
IRC log for #gluster, 2017-02-21


All times shown according to UTC.

Time Nick Message
00:07 atm joined #gluster
00:10 IRCFrEAK joined #gluster
00:10 IRCFrEAK left #gluster
00:27 cyberbootje1 anybody have experience with gluster + zfs + kvm? For some reason the disk within the VM goes "away" while trying to format it
00:49 cliluw joined #gluster
00:58 baber joined #gluster
00:58 jdossey joined #gluster
01:03 atm joined #gluster
01:04 mhulsman joined #gluster
01:08 shdeng joined #gluster
01:22 nishanth joined #gluster
01:37 shdeng joined #gluster
01:56 shdeng joined #gluster
02:30 kramdoss_ joined #gluster
02:36 aravindavk joined #gluster
02:38 cyberbootje1 log files of gluster tell me the following:
02:38 cyberbootje1 [posix.c:2289:posix_open] 0-glustertest-posix: open on /gtest1_vol1/gtest/disk1.qcow2: Invalid argument
02:38 cyberbootje1 [server-rpc-fops.c:1535:server_open_cbk] 0-glustertest-server: 16: OPEN /disk1.qcow2 (50559480-1928-4e65-99b1-8b7937bc7af3) ==> (Invalid argument)
02:38 cyberbootje1 i assume something is wrong with posix?
03:01 derjohn_mob joined #gluster
03:02 Gambit15 joined #gluster
03:05 buvanesh_kumar joined #gluster
03:09 atm joined #gluster
03:22 plarsen joined #gluster
03:43 atinm joined #gluster
03:56 Humble joined #gluster
03:57 magrawal joined #gluster
04:02 gyadav joined #gluster
04:03 nishanth joined #gluster
04:05 itisravi joined #gluster
04:22 buvanesh_kumar joined #gluster
04:27 martinetd joined #gluster
04:29 atm joined #gluster
04:30 kdhananjay joined #gluster
04:32 hgowtham joined #gluster
04:33 kramdoss_ joined #gluster
04:38 Prasad joined #gluster
04:42 RameshN joined #gluster
04:50 jiffin joined #gluster
04:55 ndarshan joined #gluster
04:56 nbalacha joined #gluster
04:56 jiffin joined #gluster
05:07 rafi joined #gluster
05:08 sanoj joined #gluster
05:08 skumar joined #gluster
05:14 msvbhat joined #gluster
05:15 sanoj joined #gluster
05:16 karthik_us joined #gluster
05:17 buvanesh_kumar joined #gluster
05:18 nbalacha joined #gluster
05:26 sona joined #gluster
05:29 Shu6h3ndu joined #gluster
05:29 ankitr joined #gluster
05:30 prasanth joined #gluster
05:30 itisravi joined #gluster
05:35 aravindavk joined #gluster
05:36 ppai joined #gluster
05:36 nbalacha joined #gluster
05:39 apandey joined #gluster
05:40 ndarshan joined #gluster
05:42 skoduri joined #gluster
05:44 msvbhat joined #gluster
05:45 mb_ joined #gluster
05:46 riyas joined #gluster
05:47 karthik_us joined #gluster
05:51 Philambdo joined #gluster
05:51 Karan joined #gluster
05:54 Humble joined #gluster
05:56 Saravanakmr joined #gluster
06:03 kotreshhr joined #gluster
06:03 ndarshan joined #gluster
06:06 rastar joined #gluster
06:20 rjoseph joined #gluster
06:27 itisravi_ joined #gluster
06:36 apandey joined #gluster
06:41 karthik_us joined #gluster
06:48 d0nn1e joined #gluster
06:50 Jacob843 joined #gluster
07:09 [diablo] joined #gluster
07:09 Shu6h3ndu_ joined #gluster
07:11 jkroon joined #gluster
07:15 msvbhat joined #gluster
07:23 rastar joined #gluster
07:24 Shu6h3ndu joined #gluster
07:28 mhulsman joined #gluster
07:29 k4n0 joined #gluster
07:32 jtux joined #gluster
07:36 shutupsquare joined #gluster
07:38 ppai joined #gluster
07:40 jtux joined #gluster
07:51 unlaudable joined #gluster
07:56 hgowtham joined #gluster
08:04 ashiq joined #gluster
08:07 arpu joined #gluster
08:11 rastar joined #gluster
08:14 sanoj joined #gluster
08:14 hybrid512 joined #gluster
08:25 fsimonce joined #gluster
08:32 nh2 joined #gluster
08:38 mbukatov joined #gluster
08:49 RameshN joined #gluster
08:50 BatS9 joined #gluster
08:51 chawlanikhil24 joined #gluster
08:52 itisravi_ joined #gluster
08:54 ahino joined #gluster
08:57 nishanth joined #gluster
08:57 hgowtham joined #gluster
08:58 chawlanikhil24 ppai, Following commands I ran:
08:59 chawlanikhil24 gluster peer probe <Ip address>
08:59 chawlanikhil24 on the RHEL server, where I had gluster installed
09:00 unlaudable joined #gluster
09:00 ahino joined #gluster
09:02 ppai chawlanikhil24, can you post the output over fpaste ?
09:02 chawlanikhil24 sure , doing it
09:02 ppai chawlanikhil24, also take look at the logs in /var/log/glusterfs/
09:02 ndevos chawlanikhil24: maybe firewall issue? and yes, ,,(paste) the logs somewhere
09:02 glusterbot chawlanikhil24: For a simple way to paste output, install netcat (if it's not already) and pipe your output like: | nc termbin.com 9999
09:03 Norky joined #gluster
09:05 chawlanikhil24 http://pastebin.com/BS7ArnKc
09:05 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
09:05 chawlanikhil24 glusterd logs
09:05 Klas joined #gluster
09:06 Klas small question, does glusterfs support IPv6?
09:06 Klas the answer, in general, on the internet, seems to be "yes, but don't trust it" =P
09:06 chawlanikhil24 gluster> peer probe 184.72.117.74
09:06 chawlanikhil24 the command I ran to create a brick
09:07 TvL2386 joined #gluster
09:07 chawlanikhil24 for 10-15 mins there has been no response to this!...
09:07 ppai chawlanikhil24, Can you check if you have glusterd already running ? Also, please check if rpcbind service is up. I also suspect your firewall is not allowing connections on 24007
09:09 chawlanikhil24 You guessed it right, just initiated glusterd
09:11 chawlanikhil24 rpcbind was not installed on my OS, just installed it and its running
09:11 chawlanikhil24 and my firewall is already disabled
09:13 chawlanikhil24 ppai, on the RHEL server, I installed gluster via yum install gluster* ; on running glusterd, it responds with no such command
09:13 kshlm chawlanikhil24, On RHEL only the glusterfs-client packages are shipped.
09:13 ppai chawlanikhil24, what document are you looking at for the instructions ?
09:14 kshlm To get the glusterfs-server packages, which has glusterd, you need a subscription to Red Hat Gluster Storage.
09:14 shutupsquare joined #gluster
09:14 kshlm Alternatively you could use the community provided EL packages built by the CentOS Storage SIG
09:15 chawlanikhil24 ppai, for installing gluster on RHEL, I followed no docs
09:15 kshlm https://wiki.centos.org/SpecialInterestGroup/Storage/gluster-Quickstart
09:15 glusterbot Title: SpecialInterestGroup/Storage/gluster-Quickstart - CentOS Wiki (at wiki.centos.org)
09:15 chawlanikhil24 kshlm ,should I git pull the open source code and build there? Will it be fine?
09:15 RameshN joined #gluster
09:16 ndevos ppai, chawlanikhil24: you don't need rpcbind for glusterd; it's only needed if you plan to use nfs
09:16 kshlm chawlanikhil24, If you want to just run glusterfs, you can use the packages from CentOS Storage SIG.
09:17 chawlanikhil24 kshlm, is there any docs which I can follow?
09:17 ppai chawlanikhil24, As kshlm mentioned, you could use centos and the RPMs (packages) provided there. I'd recommend you use the RPMs and then move on to building from gluster source.
09:18 derjohn_mob joined #gluster
09:18 chawlanikhil24 ppai, so should I deploy a new AWS CentOS server?
09:19 chawlanikhil24 and start from scratch there
09:19 ppai chawlanikhil24, you do not need AWS per se. You could as well use any VM for this.
09:20 shutupsquare Hi, I'm trying to troubleshoot massive CPU load on my two-node cluster. I have run volume profile and have the output [here](http://paste.ubuntu.com/24037076/); was wondering if anyone wise can take a look. I can see that I'm getting hammered with LOOKUP. How should I start to diagnose this? Thanks
09:20 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
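(A minimal sketch of the profiling workflow shutupsquare is using, with gv0 as a placeholder volume name; the `gluster volume top` call is an optional extra for seeing which files attract the most LOOKUPs.)

    gluster volume profile gv0 start        # begin collecting per-brick FOP statistics
    gluster volume profile gv0 info         # dump cumulative + interval stats (the kind of output pasted above)
    gluster volume top gv0 open brick server1:/bricks/b1 list-cnt 10   # most frequently opened files on one brick
    gluster volume profile gv0 stop         # stop profiling once done (it adds some overhead)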
09:20 chawlanikhil24 ppai, VM specifically CentOS right?
09:20 ppai chawlanikhil24, Sure
09:21 chawlanikhil24 ppai, thanks
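(A rough sketch of the CentOS Storage SIG route kshlm and ppai describe, assuming a CentOS 7 VM with firewalld; the placeholder IP is not from this conversation.)

    yum install centos-release-gluster       # enables the Storage SIG repository
    yum install glusterfs-server
    systemctl start glusterd && systemctl enable glusterd
    firewall-cmd --permanent --add-port=24007-24008/tcp && firewall-cmd --reload   # management ports; brick ports (49152+) need opening too if the firewall is on
    gluster peer probe <ip-of-other-node>
    gluster peer status                      # should now list the probed peer as connected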
09:21 BatS9 Klas: as far as I have tested, IPv6 should work on current versions; I've had issues with running mixed environments though
09:22 Klas ipv4 AND ipv6 you mean?
09:22 BatS9 yes
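(On the IPv6 question: as far as I can tell, support around this era hinges on the transport.address-family setting; treat the exact option names below as assumptions to verify against the release notes rather than a guarantee.)

    # in /etc/glusterfs/glusterd.vol on every node, then restart glusterd:
    option transport.address-family inet6
    # per-volume equivalent (reportedly available on recent releases):
    gluster volume set gv0 transport.address-family inet6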
09:23 ShwethaHP joined #gluster
09:26 Shu6h3ndu joined #gluster
09:27 RameshN joined #gluster
09:31 rjoseph joined #gluster
09:34 sona joined #gluster
09:35 itisravi_ joined #gluster
09:47 Wizek__ joined #gluster
10:02 shutupsq_ joined #gluster
10:02 apandey joined #gluster
10:02 skoduri_ joined #gluster
10:03 shutupsquare joined #gluster
10:07 ahino joined #gluster
10:08 hgowtham joined #gluster
10:09 aravindavk joined #gluster
10:11 cloph Hi * - having problems with geo-replication and symlinks - "[2017-02-21 10:11:04.508497] E [gfid-access.c:225:ga_newfile_parse_args] 0-gfid-access-autoload: gfid: 318bb32a-8989-4996-97f4-5dbfd82258a4. Invalid length" on the slave, for 31/8b/318bb32a-8989-4996-97f4-5dbfd82258a4 -> /rsyncd-munged//tmp/ssh-HFandJAC76/agent.5522 on master
10:12 apandey joined #gluster
10:13 cloph geo-rep session then goes into faulty mode with http://paste.ubuntu.com/24039428/ on master's log
10:13 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
10:14 cloph master is a backup volume, where rsnapshot based backups are stored. In other words lots of rotation (daily.x → daily.x+1), those dirs having lots of hardlinks..
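(A short sketch of where to look next for a faulty geo-rep session like cloph's; volume and host names are placeholders.)

    gluster volume geo-replication mastervol slavehost::slavevol status detail   # worker state, crawl status, failures
    gluster volume geo-replication mastervol slavehost::slavevol config          # current session settings
    # worker logs on the master side:
    ls /var/log/glusterfs/geo-replication/mastervol/
    # gsyncd/slave logs on the slave side:
    ls /var/log/glusterfs/geo-replication-slaves/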
10:18 chawlanikhil24 exit
10:19 saintpablo joined #gluster
10:19 Seth_Karlo joined #gluster
10:22 msvbhat joined #gluster
10:25 ahino joined #gluster
10:26 kotreshhr joined #gluster
10:27 Seth_Kar_ joined #gluster
10:27 kotreshhr1 joined #gluster
10:30 msvbhat joined #gluster
10:37 shutupsq_ joined #gluster
10:41 rjoseph joined #gluster
10:50 jkroon joined #gluster
10:57 humblec joined #gluster
11:02 kotreshhr1 left #gluster
11:05 shutupsquare joined #gluster
11:12 TZaman joined #gluster
11:13 TZaman joined #gluster
11:14 Karan joined #gluster
11:28 Klas BatS9: ok, thanks
11:37 ahino joined #gluster
11:40 msvbhat joined #gluster
11:46 pjrebollo joined #gluster
12:03 rastar joined #gluster
12:04 pjrebollo joined #gluster
12:06 Philambdo joined #gluster
12:10 jkroon joined #gluster
12:11 pjrebollo joined #gluster
12:20 rjoseph joined #gluster
12:26 msvbhat joined #gluster
12:31 rastar kshlm: atinm why does the xml output of volume info show a distribute count which is +1 of the actual?
12:31 rastar kshlm: for a 2x3 volume I have, xml output has this line
12:31 rastar kshlm:   <brickCount>6</brickCount>
12:32 rastar <distCount>3</distCount>
12:32 rastar <stripeCount>1</stripeCount>
12:32 rastar <replicaCount>3</replicaCount>
12:32 rastar ashiq: ^^^
12:34 gyadav joined #gluster
12:35 logan- joined #gluster
12:37 ppai joined #gluster
12:38 ashiq rastar, I think dist count and replica count are the same
12:41 ashiq rastar, I checked a few more volumes; looks like I am getting the same value for both
12:42 BatS9 3.10 release still planned for today or moving it forward?
12:49 jiffin1 joined #gluster
12:53 fcoelho joined #gluster
12:53 panina joined #gluster
13:00 kshlm rastar, It's possible the calculation is wrong.
13:00 kshlm We try to figure out the dist count from the other counts.
13:01 kshlm Dist count is never stored or available from volinfo.
13:01 ahino1 joined #gluster
13:04 bfoster joined #gluster
13:04 rastar kshlm: thanks, I will recalculate it using other info
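(For reference, the arithmetic behind rastar's observation: for his 2x3 volume the distribute count should follow from the other counts as below, yet the xml echoes the replica count.)

    distribute count = brickCount / (replicaCount * stripeCount)
                     = 6 / (3 * 1)
                     = 2            # xml reports <distCount>3</distCount> instead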
13:05 rwheeler joined #gluster
13:11 bfoster joined #gluster
13:12 rwheeler joined #gluster
13:16 humblec joined #gluster
13:18 skoduri joined #gluster
13:19 karthik_us joined #gluster
13:20 msvbhat joined #gluster
13:21 A_bot joined #gluster
13:31 mhulsman joined #gluster
13:34 unclemarc joined #gluster
13:36 karthik_us joined #gluster
13:37 ira joined #gluster
13:40 shyam joined #gluster
13:46 oajs joined #gluster
13:48 atinm joined #gluster
13:49 baber joined #gluster
13:57 Philambdo joined #gluster
14:01 cholcombe joined #gluster
14:02 unlaudable joined #gluster
14:08 jiffin joined #gluster
14:13 rafi joined #gluster
14:13 plarsen joined #gluster
14:14 rafi joined #gluster
14:22 skylar joined #gluster
14:22 gem joined #gluster
14:22 kpease joined #gluster
14:24 kpease_ joined #gluster
14:25 Humble joined #gluster
14:30 kpease joined #gluster
14:30 mhulsman joined #gluster
14:32 mhulsman joined #gluster
14:33 mhulsman joined #gluster
14:40 grayeul joined #gluster
14:44 ankitr joined #gluster
14:45 grayeul hey -- I'm trying to resolve a situation with a replicated gluster setup (v3.5.9) -- I've been trying to read through the docs and understand what is going on, and learning a bit about the gfid stuff.
14:46 grayeul However, if I find a file (or directory...) that has no trusted.gfid attribute.... is there a way to fix that?
14:48 grayeul My heal info shows no split-brain, but I have several copies of the same gfid showing up in heal-failed info....   and I know of a few directories that don't have a gfid and therefore (I assume this is why) they don't work, even though I see dir entries on both bricks that should be supporting that dir.
14:51 nbalacha joined #gluster
14:54 elico joined #gluster
14:54 elico What is the equivalent of raid-6 in glusterfs?
14:55 victori joined #gluster
14:56 sloop joined #gluster
14:56 aravindavk joined #gluster
14:57 k0nsl joined #gluster
14:57 k0nsl joined #gluster
15:02 rjoseph joined #gluster
15:09 m0zes joined #gluster
15:10 grayeul joined #gluster
15:15 cloph grayeul: the gfid is expected to be the same - as on one brick it is in state A, while on another brick it is in state B, hence the split brain...
15:16 grayeul right.. but when I check heal info... it says there is no 'split-brain' problem... but it does list several gfids in the 'heal-failed' list.
15:17 grayeul and in just poking around, I've tried getting the gfid (via getfattr) from some files/directories -- and in some cases there is no gfid attribute (on either brick)
15:18 grayeul it is possible, that *I* did that.. and messed things up... I've been struggling for > 1 day trying to see what is up, and read somewhere about manually rsyncing files so the brick contents are good, and then clearing the attributes....
15:18 grayeul is there a way to re-index/re-create missing ones?
15:22 cloph elico: there is no direct mapping to raid levels. how many disks/bricks you can lose and still have a fully operational volume depends on replica count and quorum settings.
15:22 grayeul cloph - one of the gfids that comes up problematic in the heal-failed list is my $HOME directory (for my main account) -- which is kind of a pain... :)
15:22 cloph if you want maximum available disk-space, and there are not so many writes, a dispersed volume is closest I guess.
15:24 atinm joined #gluster
15:25 cloph grayeul: if they are in split-brain, they cannot be automatically healed. You need to tell gluster what is the correct copy.
15:25 cloph for directories, that's likely to just be the timestamp, so not really critical
15:26 grayeul ok... a) split-brain reports nothing, so I don't *think* they are in split brain.... b) how do I set/define the correct version (of a directory), and c) what if I have a directory that appears the same on both bricks, but not showing up via gluster -- and has no gfid in attr?
15:27 aravindavk joined #gluster
15:27 grayeul ... not sure what heal-failed, but not split-brain really implies...
15:29 gyadav joined #gluster
15:31 untoreh joined #gluster
15:32 elico cloph: I want to use 4 nodes with stripe 2 and replica 2 since it seems like the right setup compared to raid-6
15:32 cloph you don't really want stripe.
15:32 cloph @stripe
15:32 glusterbot cloph: (#1) Please see http://joejulian.name/blog/should-i-use-stripe-on-glusterfs/ about stripe volumes., or (#2) The stripe translator is deprecated. Consider enabling sharding instead.
15:33 elico cloph: is it your writing?
15:33 cloph no, not my blog or bot (but JoeJulian's - he knows way more about gluster than I do :-))
15:34 farhorizon joined #gluster
15:34 cloph for a 2-way distributed replica 2 consisting of 4 bricks, you cannot lose *any* two bricks though.
15:34 cloph you can only lose two if those are part of different replica sets.
15:35 cloph also, with two peers not available, you're hitting quorum problems, as with only 50% up, you cannot tell whether it is a netsplit or hosts are really down.
15:35 cloph (you can tell it to ignore that though)
15:37 elico cloph: so what your suggestion? distribute and replica?
15:39 cloph depends all on your needs - but yes, instead of trying a "stripe" volume, use distributed replicated type.
15:40 elico cloph: I didn't know that there was a distributed one.
15:41 elico In the RH examples I found using Google there were stripe and replica.
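(A minimal sketch of the distributed-replicate + sharding route cloph and glusterbot point at, as the modern stand-in for stripe; all hostnames, brick paths and the block size are made-up examples.)

    gluster volume create demo replica 2 \
        node1:/bricks/b1 node2:/bricks/b1 \
        node3:/bricks/b2 node4:/bricks/b2          # 2 x 2 distributed-replicate across the 4 nodes
    gluster volume set demo features.shard on      # large files are split into shards spread over the volume
    gluster volume set demo features.shard-block-size 64MB
    gluster volume start demo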
15:41 untoreh is restarting all the glusterfs-server daemons the same as restarting a volume? do I need to restart a volume to apply changed configs?
15:43 pioto joined #gluster
15:44 nirokato joined #gluster
15:53 rwheeler joined #gluster
15:58 Guest98030 joined #gluster
16:01 rwheeler joined #gluster
16:03 edong23 joined #gluster
16:04 msvbhat joined #gluster
16:05 kkeithley don't use stripe, it's deprecated. Use disperse or shard instead.
16:05 amye joined #gluster
16:06 elico kkeithley: where are these documented?
16:06 kkeithley you should (only) start and stop the glusterfsd daemons via the gluster cli, i.e. `gluster volume {start|stop} $volname`
16:06 kkeithley http://glusterfs.readthedocs.io/en/latest/
16:06 glusterbot Title: Gluster Docs (at glusterfs.readthedocs.io)
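(On untoreh's earlier question, a hedged sketch: most `gluster volume set` options are picked up by the running processes without a restart, and when a full restart really is needed the supported way is the CLI kkeithley mentions, not restarting the glusterfs-server services by hand. Option and volume names below are placeholders.)

    gluster volume set gv0 some.option value   # typically applied live via a graph change
    gluster volume stop gv0                    # only when a real restart is required
    gluster volume start gv0
    gluster volume start gv0 force             # restart only brick processes that are down, without stopping the volume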
16:06 wushudoin joined #gluster
16:07 wushudoin joined #gluster
16:08 armyriad joined #gluster
16:08 oajs joined #gluster
16:22 BatS9 elico: I'd guess a dispersed volume with redundancy 2
16:22 BatS9 Would be the closest you get to a raid6 setup
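(To make BatS9's suggestion concrete: a RAID-6-like layout is data + 2 redundancy, and gluster requires more bricks than 2*redundancy, so a 4+2 dispersed volume wants 6 bricks; on exactly 4 nodes the closest option is single redundancy. Hostnames and paths are placeholders.)

    # RAID-6-ish: 4 data + 2 redundancy, any 2 bricks may fail
    gluster volume create ec62 disperse 6 redundancy 2 node{1..6}:/bricks/ec
    # with only 4 bricks, redundancy 2 is not accepted (brick count must exceed 2*redundancy);
    # a RAID-5-ish alternative on 4 nodes:
    gluster volume create ec41 disperse 4 redundancy 1 node{1..4}:/bricks/ec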
16:27 msvbhat joined #gluster
16:33 Seth_Karlo joined #gluster
16:35 skoduri joined #gluster
16:38 dspisla joined #gluster
16:46 jdossey joined #gluster
16:46 riyas joined #gluster
17:00 jiffin joined #gluster
17:15 [diablo] joined #gluster
17:17 mhulsman joined #gluster
17:18 mhulsman joined #gluster
17:23 kpease joined #gluster
17:24 kpease_ joined #gluster
17:32 gyadav_ joined #gluster
17:47 atinm joined #gluster
17:55 cholcombe joined #gluster
18:07 shutupsquare joined #gluster
18:26 elico BatS9: thanks
18:26 elico BatS9: did you meant replica 2? right?
18:27 farhorizon joined #gluster
18:27 elico oh I see, so it's a dispersed volume with redundancy.
18:32 cloph elico: note that dispersed volumes have a completely different kind of redundancy than replica volumes (be they distributed or not)
18:32 vbellur joined #gluster
18:32 elico cloph: it took me a while to understand it, but I was lacking a couple of things to illustrate it in my mind.
18:32 cloph you'll have an additional penalty to create all the erasure codes, so depends on your usecase whether it is better or not..
18:33 plarsen joined #gluster
18:39 elico I will try to see if someone talks about it in a video.
18:43 Peppard joined #gluster
18:46 cholcombe joined #gluster
18:47 Gambit15 elico, slideshare may have some breakdowns, although if you want a proper form of understanding, you'd be best looking for knowledgeable blog articles
18:47 Gambit15 Also, check out Red Hat's documentation
18:50 Gambit15 A simpler setup, which'd also provide better I/O, would be to use distributed replicated with arbiters. Although with 4 nodes, you'd only be able to lose 1 node from each replica pair
18:51 Gambit15 For example, my own initial setup uses 4 nodes in 2x(2+1). The arbiter brick for the first replica pair is on the first node of the second replica pair, and vice versa
18:54 Gambit15 Technically, if I lost both 1st nodes of each pair (which hold the arbiters), then that'd still break quorum, but that shouldn't be too much of a risk if you plan your redundancy well, and you could always manually disable quorum in the edge case of having to deal with such a situation.
18:55 Gambit15 At the moment, dist-rep seems to be a much more used & better documented case than dispersed.
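(A sketch of the 2x(2+1) arbiter layout Gambit15 describes, with the arbiter of each pair hosted on the first node of the other pair; node names and brick paths are placeholders.)

    gluster volume create gv0 replica 3 arbiter 1 \
        node1:/bricks/b1 node2:/bricks/b1 node3:/arbiter/b1 \
        node3:/bricks/b2 node4:/bricks/b2 node1:/arbiter/b2
    gluster volume start gv0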
19:01 ahino joined #gluster
19:19 shyam joined #gluster
19:22 elico Gambit15: thanks I will try to read the docs and watch videos first
19:22 cacasmacas joined #gluster
19:23 nh2 joined #gluster
19:26 nh2 joined #gluster
19:39 shyam joined #gluster
19:42 Seth_Karlo joined #gluster
19:43 oajs joined #gluster
19:44 Seth_Karlo joined #gluster
19:51 mhulsman joined #gluster
19:53 mhulsman joined #gluster
20:03 vbellur joined #gluster
20:37 glustin joined #gluster
20:38 derjohn_mob joined #gluster
20:56 Homastli joined #gluster
20:58 arpu joined #gluster
20:58 Homastli I can't get my gluster nfs server running - it doesn't show in gluster vol status. Do I need to install any other packages?
21:01 JoeJulian Homastli: 3.8+ does not start nfs by default. "gluster volume set $vol nfs.disable false"
21:03 Homastli aha, thanks
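(The full sequence behind JoeJulian's answer, with gv0 as a placeholder; note ndevos' earlier point that the gluster NFS server also wants rpcbind running.)

    gluster volume set gv0 nfs.disable false
    gluster volume status gv0        # an "NFS Server on ..." line should now appear per node
    mount -t nfs -o vers=3 server1:/gv0 /mnt/gv0   # gluster's built-in NFS server speaks NFSv3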
21:11 Gambit15 JoeJulian, remember that issue I had with a cloned volume only appearing on the peer I ran the command on? ...and I was neither able to delete the volume via the cli nor remove its directory under the vols dir (it only existed on the one peer, but glusterd kept recreating it)
21:12 JoeJulian yep
21:12 Gambit15 For some bizarre reason, the volume just appeared & started working normally on the other peers some 58 hours later
21:12 JoeJulian smh
21:12 Gambit15 According to the logs, the peers just started treating it as if it always existed.
21:13 Gambit15 V. odd!
21:13 Gambit15 Anyway, just an FYI - being so peculiar 'n all
21:14 JoeJulian was it actually 58 hours, or was it just not noticed until 58 hours later?
21:29 ij joined #gluster
21:29 farhorizon joined #gluster
21:37 shyam joined #gluster
21:47 baber joined #gluster
22:05 siel joined #gluster
22:05 Acinonyx joined #gluster
22:06 Peppard joined #gluster
22:07 shutupsq_ joined #gluster
22:11 d0nn1e joined #gluster
22:15 Vapez_ joined #gluster
22:19 shutupsquare joined #gluster
22:20 shutups__ joined #gluster
22:25 rastar joined #gluster
22:28 farhorizon joined #gluster
22:31 Karan joined #gluster
22:43 jerrcs_ joined #gluster
22:51 chjohnst joined #gluster
23:00 Klas joined #gluster
23:21 moneylotion joined #gluster
23:22 jockek joined #gluster
23:26 jdossey joined #gluster
23:36 jockek joined #gluster
23:37 nishanth joined #gluster
