
IRC log for #gluster, 2016-01-06


All times shown according to UTC.

Time Nick Message
00:01 haomaiwa_ joined #gluster
00:05 togdon joined #gluster
00:13 MTO joined #gluster
00:16 MTO Hello! I've got a cluster that is mounting with all files and directories with permissions 755, so I can't write to it. I've tried unmounting it and fixing the perms of the mount-point, but it's not helping. Is there some kind of user-mapping going on that I'm not aware of?
00:17 F2Knight joined #gluster
00:21 JoeJulian Nope. Just uid and mode like any posix filesystem.
00:26 MTO Dunno why the chmod -R +w /gfsvol didn't work then...
00:27 F2Knight joined #gluster
00:27 JoeJulian selinux?
00:33 MTO It's not installed by default with Debian Jessie, I thought. Doesn't look like it's active, anyhow.
00:36 F2Knight_ joined #gluster
00:39 JoeJulian Ok, last I can suggest is to check the client log for errors. You should be able to change permissions like normal.
00:46 MTO Client log! Duh. I only looked at the server logs and dmesg on the client...
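
A minimal sketch of the checks discussed above; the volume name, mount point, and log file name are illustrative assumptions, not taken from the log:

    # check whether SELinux is actually enforcing on the client (Debian usually ships without it)
    getenforce 2>/dev/null || echo "SELinux tools not installed"
    # remount the volume and change permissions through the mount, not on the unmounted mount point
    mount -t glusterfs server1:/gfsvol /gfsvol
    chmod -R u+w /gfsvol
    # the FUSE client logs under /var/log/glusterfs, named after the mount point
    tail -f /var/log/glusterfs/gfsvol.log
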
00:57 zhangjn joined #gluster
01:00 MTO Hmm. I see in the logs "Using Program GlusterFS 3.3", but I've installed 3.7.5, and I'm using a dispersed volume, which needs at least 3.6...
01:01 haomaiwa_ joined #gluster
01:17 EinstCrazy joined #gluster
01:18 hagarth MTO: 3.3 is the rpc program version. All current Gluster releases use rpc version 3.3.
01:29 zhangjn joined #gluster
01:30 julim joined #gluster
01:44 plarsen joined #gluster
01:48 sankarshan_ joined #gluster
01:49 overclk joined #gluster
01:51 Lee1092 joined #gluster
01:53 farhorizon joined #gluster
02:07 Rapture joined #gluster
02:09 harish joined #gluster
02:18 zhangjn joined #gluster
02:20 64MAAO7J5 joined #gluster
02:27 coredump joined #gluster
02:33 kdhananjay joined #gluster
02:46 zhangjn joined #gluster
02:47 ilbot3 joined #gluster
02:47 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:55 MTO hagarth: Ah. Thanks. So that's not a worry...
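
For reference, a quick way to separate the installed release from the rpc program version hagarth mentions; these commands assume nothing beyond a standard install:

    # the "GlusterFS 3.3" string in the log is the rpc program version, not the release
    glusterfs --version      # on a client
    glusterd --version       # on a server
    # the volume type (e.g. Disperse) is visible here and confirms 3.6+ features are in use
    gluster volume info
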
02:58 zhangjn joined #gluster
03:01 haomaiwa_ joined #gluster
03:12 n0b0dyh3r3 joined #gluster
03:17 coredump joined #gluster
03:17 Peppard joined #gluster
03:24 d0nn1e joined #gluster
03:29 B21956 joined #gluster
03:32 vmallika joined #gluster
03:42 farhorizon joined #gluster
03:45 farhoriz_ joined #gluster
03:53 atinm joined #gluster
03:59 RameshN joined #gluster
03:59 nathwill joined #gluster
04:01 haomaiwa_ joined #gluster
04:02 kdhananjay joined #gluster
04:06 arcolife joined #gluster
04:09 nbalacha joined #gluster
04:09 calavera joined #gluster
04:15 overclk joined #gluster
04:15 kanagaraj joined #gluster
04:17 farhorizon joined #gluster
04:22 sakshi joined #gluster
04:23 poornimag joined #gluster
04:25 aravindavk joined #gluster
04:29 bharata-rao joined #gluster
04:30 calavera joined #gluster
04:34 shubhendu joined #gluster
04:35 farhorizon joined #gluster
04:36 nehar joined #gluster
04:40 rafi joined #gluster
04:48 pppp joined #gluster
04:50 gem joined #gluster
04:56 jiffin joined #gluster
04:57 rafi joined #gluster
05:01 haomaiwa_ joined #gluster
05:07 ndarshan joined #gluster
05:10 kasturi joined #gluster
05:12 dgbaley joined #gluster
05:13 ramky joined #gluster
05:17 kotreshhr joined #gluster
05:17 jiffin1 joined #gluster
05:19 RameshN joined #gluster
05:26 zhangjn joined #gluster
05:26 hgowtham joined #gluster
05:26 ashiq joined #gluster
05:27 farhoriz_ joined #gluster
05:28 Apeksha joined #gluster
05:30 Manikandan joined #gluster
05:36 hagarth joined #gluster
05:36 zhangjn joined #gluster
05:37 hgowtham joined #gluster
05:39 Saravana_ joined #gluster
05:41 jobewan joined #gluster
05:43 anil joined #gluster
05:44 kshlm joined #gluster
05:47 ppai joined #gluster
05:54 F2Knight joined #gluster
05:55 nehar joined #gluster
05:56 F2Knight_ joined #gluster
05:58 Bhaskarakiran joined #gluster
06:00 zhangjn_ joined #gluster
06:01 haomaiwang joined #gluster
06:02 karnan joined #gluster
06:02 DV joined #gluster
06:04 edong23 joined #gluster
06:08 zhangjn joined #gluster
06:09 arcolife joined #gluster
06:11 kdhananjay joined #gluster
06:12 MACscr ok, gluster really uses cpu resources to resync, is that correct? how about ram? my little test machines might only have 1gb on them
06:12 MACscr but 8 cores
06:12 vimal joined #gluster
06:14 dusmant joined #gluster
06:16 zhangjn joined #gluster
06:16 rafi joined #gluster
06:19 kdhananjay joined #gluster
06:43 atalur joined #gluster
06:48 spalai joined #gluster
06:50 R0ok_ joined #gluster
06:57 vmallika joined #gluster
06:58 kaushal_ joined #gluster
07:01 haomaiwa_ joined #gluster
07:07 farhorizon joined #gluster
07:08 Javezim joined #gluster
07:09 SOLDIERz joined #gluster
07:11 Javezim Hey all, just wondering if anyone here has used GlusterFS to host their Windows Hyper-V VHDX disks? If with Samba too, that would be great. Looking for information on how reads/writes work, any tweaks that can be applied to Gluster/Samba to improve performance, and whether this is overall a good solution for hosting a fair number of VMs.
07:13 farhorizon joined #gluster
07:15 zhangjn joined #gluster
07:18 Manikandan joined #gluster
07:19 tom[] joined #gluster
07:21 farhorizon joined #gluster
07:31 jtux joined #gluster
07:34 arcolife joined #gluster
07:34 Bhaskarakiran joined #gluster
07:35 nehar joined #gluster
07:39 farhorizon joined #gluster
07:45 farhorizon joined #gluster
07:45 mhulsman joined #gluster
07:46 jtux joined #gluster
07:49 farhorizon joined #gluster
07:51 the-me joined #gluster
07:56 nehar joined #gluster
07:57 farhorizon joined #gluster
07:58 kdhananjay joined #gluster
08:01 haomaiwa_ joined #gluster
08:08 itisravi joined #gluster
08:12 mobaer joined #gluster
08:21 sankarshan_ joined #gluster
08:22 jwaibel joined #gluster
08:25 Saravana_ joined #gluster
08:28 jiffin joined #gluster
08:31 farhorizon joined #gluster
08:32 EinstCra_ joined #gluster
08:38 jiffin joined #gluster
08:42 Saravana_ joined #gluster
08:42 XpineX joined #gluster
08:43 aravindavk joined #gluster
08:48 rwheeler joined #gluster
08:53 dusmant joined #gluster
09:01 haomaiwa_ joined #gluster
09:04 jiffin1 joined #gluster
09:04 ahino joined #gluster
09:08 kshlm joined #gluster
09:12 zhangjn joined #gluster
09:18 Slashman joined #gluster
09:18 kovshenin joined #gluster
09:22 ashiq joined #gluster
09:24 poornimag joined #gluster
09:27 Manikandan joined #gluster
09:31 ramky joined #gluster
09:42 EinstCrazy joined #gluster
09:46 Apeksha joined #gluster
09:47 shubhendu joined #gluster
09:49 gem joined #gluster
09:51 dusmant joined #gluster
09:51 zhangjn joined #gluster
10:01 haomaiwa_ joined #gluster
10:05 ramky joined #gluster
10:08 Apeksha joined #gluster
10:17 dusmant joined #gluster
10:18 zhangjn joined #gluster
10:19 Bhaskarakiran joined #gluster
10:21 ashiq joined #gluster
10:21 hgowtham_ joined #gluster
10:21 shubhendu joined #gluster
10:25 hgowtham joined #gluster
10:28 harish_ joined #gluster
10:44 lanning joined #gluster
10:47 ramky joined #gluster
10:49 jwd joined #gluster
11:01 haomaiwa_ joined #gluster
11:05 jwaibel joined #gluster
11:16 zhangjn joined #gluster
11:24 nehar joined #gluster
11:28 gowtham joined #gluster
11:41 shubhendu joined #gluster
11:43 Humble joined #gluster
11:50 hgowtham joined #gluster
11:52 jwd joined #gluster
11:55 jiffin1 joined #gluster
11:55 sankarshan_ joined #gluster
11:56 kshlm Weekly Gluster Community Meeting is starting in 5 minutes in #gluster-meeting
12:01 jdarcy joined #gluster
12:01 bluenemo joined #gluster
12:02 Bhaskarakiran joined #gluster
12:03 gem joined #gluster
12:05 arcolife joined #gluster
12:11 nbalacha joined #gluster
12:21 DonLin Javezim: I'm just testing myself with VMware VMs over NFS. But for Windows you might try iSCSI http://www.gluster.org/community/documentation/index.php/GlusterFS_iSCSI to see how it works out (I don't have any experience with it). Also ndevos referred me to this URL: https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3.1/html/Configuring_Red_Hat_Enterprise_Virtualization_with_Red_Hat_Gluster_Storage/chap-Hosting_Virtual_Machine_Images_on_Red_Hat_Storage_volumes.html#Configuring_Volumes_Using_the_Command_Line_Interface
12:25 spalai left #gluster
12:29 ira joined #gluster
12:34 overclk joined #gluster
12:34 liviudm_ joined #gluster
12:39 liviudm_ Hi all, I'm working on a solution based on Docker Swarm and I was thinking of using glusterfs for storing the volumes attached to the Docker containers. Most of these volumes are going to be attached to Mongo, MySQL and Postgres instances; does anyone here have any experience with this? The plan is to be able to spin up the containers on another server in case of failure, but I'm not sure if having the database data on gluster is a good idea in terms of performance, and whether there are any best practices
12:40 liviudm_ I'm new to gluster, so any piece of advice is highly appreciated :)
12:41 _NiC left #gluster
12:42 DonLin liviudm_: Doesn't Docker save all of the files directly to disk instead of an image file? (In which case regular guidelines on putting a database on Gluster storage would apply)
12:43 DonLin I read somewhere that it wasn't recommended, although it didn't really specify why. (I'm also rather new to glusterfs BTW, so don't take my word for it)
12:43 liviudm_ yeah, it puts them on disk, you basically "mount" a local directory inside the docker container and it is treated as non-ephemeral storage
12:43 DonLin then it shouldn't matter that you're using Docker
12:44 DonLin there might be more information on the subject if you search on Docker and database
12:44 liviudm_ yes, I was reading the same, but I was reading some success stories as well, so that's why I'm a bit confused...
12:44 DonLin Seems to me that Gluster has become a lot better/mature in the last few years
12:45 DonLin So it might not be an issue anymore
12:45 zhangjn joined #gluster
12:46 zhangjn joined #gluster
12:47 zhangjn joined #gluster
12:48 liviudm_ I'm between gluster and ceph; unfortunately I can't seem to find enough information on either for a similar case, and setting up a docker cluster without a way of moving volumes around in case of server failure is pointless
12:51 DonLin ceph seems to require an additional metadata server and it seems less mature
12:52 DonLin but why wouldn't you be able to start a docker instance on another server?
12:53 EinstCrazy joined #gluster
12:55 liviudm_ if it's a database, it will also need its associated data. If the server it was on is down, there's no way to recover that data, so it's not that useful to start the database with no data
12:56 DonLin You can create a replicated Gluster volume so that if one gluster server is down the other one takes over
12:56 DonLin or is that not what you mean?
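
A minimal sketch of the replicated setup DonLin describes, assuming three servers; hostnames, brick paths, and the Postgres example are placeholders, not a tested recipe:

    # three-way replica: the volume stays writable if a single server fails
    gluster volume create dockervol replica 3 \
        node1:/bricks/dockervol node2:/bricks/dockervol node3:/bricks/dockervol
    gluster volume start dockervol
    # each Docker host mounts the volume and hands a subdirectory to the container
    mount -t glusterfs node1:/dockervol /mnt/dockervol
    docker run -v /mnt/dockervol/pgdata:/var/lib/postgresql/data postgres
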
12:56 haomaiwa_ joined #gluster
12:56 bhuddah btw... liviudm_ about what number of hosts are we talking here?
12:57 liviudm_ DonLin that was my plan, but I'm reading mixed opinions on whether it's a good idea to run databases on top of gluster...
12:57 kanagaraj joined #gluster
12:57 Javezim Hey all, just wondering if anyone here has used GlusterFS to host their Windows Hyper-V VHDX disks? If with Samba too, that would be great. Looking for information on how reads/writes work, any tweaks that can be applied to Gluster/Samba to improve performance, and whether this is overall a good solution for hosting a fair number of VMs.
12:58 plarsen joined #gluster
12:58 liviudm_ bhuddah I'll have 3 "masters" and any variable number of client nodes, depending on the current need
12:58 DonLin liviudm_: Also depends on how critical it is, I suppose... I haven't used gluster in production yet
12:59 DonLin Javezim: Did you see my earlier response?
12:59 liviudm_ it's going to be production, so if anything goes down, somebody is going to scream :))
12:59 bhuddah liviudm_: i mean physically.
12:59 liviudm_ bhuddah it's going to be AWS
13:00 bhuddah liviudm_: i bet there is an AWS product that delivers scalable storage.
13:00 liviudm_ there will be
13:00 liviudm_ AWS EFS
13:00 liviudm_ it's currently in preview and available on request in us west, I need it in us east...
13:01 liviudm_ and I would have my doubts to use it during preview for a production app...
13:02 bhuddah ah, right. i read about that :)
13:02 liviudm_ believe me, I would be more than happy to use it if it was available :)
13:02 DonLin I'm also still hesitant to put glusterfs in production for VMware VMs; I would never dare to put anything important in the public cloud though :)
13:03 liviudm_ DonLin with the right level of redundancy you're fine, never had any problems as long as the setup was done well
13:04 liviudm_ of course, a good setup is going to cost you a lot, but it's not like buying physical servers and putting them in datacenters around the world is going to be cheap
13:05 RameshN joined #gluster
13:05 DonLin well I don't know, I read some pretty bad stories, and 1.6GHz CPUs don't really seem very inviting either
13:06 bhuddah liviudm_: a short bit of research suggests that there are 3rd-party providers for HA NFS storage in AWS if you care about that.
13:06 bhuddah liviudm_: that way you could bridge the gap until aws efs is mature.
13:07 bhuddah i know this is sort of a side track to the actual discussion...
13:07 liviudm_ DonLin I would never recommend t2.micro instances for anything other than testing, but you've got a choice... it all depends on the budget; for some projects it's fine, for some it's not :)
13:08 DonLin True :)
13:08 Javezim DonLin: Oh whoops, yeah, just saw it then. I tested GlusterFS over NFS with Hyper-V and ran into some big issues, so I guess Samba and NFS are both out. Will give iSCSI a go. I do have a few VMs through VMware that I was hoping to eventually put onto the gluster cluster and share either via Samba or NFS; may I ask how it is going with NFS so far?
13:09 liviudm_ bhuddah yes, I found some too; unfortunately the company will not approve using some other company just to provide NFS storage, so it's going to have to be a handmade solution
13:09 liviudm_ the world is not always perfect :)
13:10 bhuddah liviudm_: yeah. unfortunate timing then. i guess you gotta take a leap of faith there and test the approach
13:10 DonLin Javezim: Testing is going fine with one Windows VM and one Linux VM. Also failover with CTDB works fine.
13:11 EinstCrazy joined #gluster
13:13 dusmant joined #gluster
13:25 Akee joined #gluster
13:26 Javezim Ok, so we've established Hyper-V should work on GlusterFS shared with iSCSI, but does anyone here have any experience with using Samba and it working? Just want to know if there is any way to get this working.
13:30 ndevos Javezim: Hyper-V over Samba? maybe ira knows more about that
13:31 baoboa joined #gluster
13:38 zhangjn joined #gluster
13:47 tom[] i see log messages like these when files are written to a mount https://gist.github.com/tom--/03fe189cc9136ff4ae0a "E: No data available occurred while creating symlinks" and associated W and I messages. it amounts to a lot of log messages.
13:47 glusterbot tom[]: https://gist.github.com/tom's karma is now -8
13:47 glusterbot Title: gluster log flooding · GitHub (at gist.github.com)
13:48 tom[] i looked at bug reports and i think i can ignore these as it is a harmless gluster bug
13:48 tom[] glusterfs 3.4.2, Ubuntu 14.04.3 LTS, XFS 3.1.9, 3-node replication
13:48 tom[] can you suggest anything besides upgrading gluster or aggressive log rotation?
13:56 zhangjn joined #gluster
14:01 haomaiwa_ joined #gluster
14:09 atalur joined #gluster
14:17 arcolife joined #gluster
14:18 nangthang joined #gluster
14:34 skylar joined #gluster
14:36 harold joined #gluster
14:38 dgandhi joined #gluster
14:38 kdhananjay joined #gluster
14:39 chirino joined #gluster
14:39 dgandhi joined #gluster
14:41 dgandhi joined #gluster
14:42 dgandhi joined #gluster
14:44 dgandhi joined #gluster
14:45 dgandhi joined #gluster
14:46 dgandhi joined #gluster
14:47 jwaibel joined #gluster
14:48 dgandhi joined #gluster
14:49 julim joined #gluster
14:49 dgandhi joined #gluster
14:50 bennyturns joined #gluster
14:51 dgandhi joined #gluster
14:52 dgandhi joined #gluster
14:52 kovshenin joined #gluster
14:53 dgandhi joined #gluster
14:54 mhulsman1 joined #gluster
14:55 dgandhi joined #gluster
14:57 dgandhi joined #gluster
14:57 dgandhi joined #gluster
14:58 dgandhi joined #gluster
14:59 kshlm joined #gluster
14:59 atalur joined #gluster
15:02 21WAAPS2Q joined #gluster
15:07 kovsheni_ joined #gluster
15:13 kovshenin joined #gluster
15:16 vmallika joined #gluster
15:27 kovshenin joined #gluster
15:28 bowhunter joined #gluster
15:38 neofob joined #gluster
15:45 gem joined #gluster
15:50 MACscr joined #gluster
15:51 MACscr joined #gluster
15:51 MACscr joined #gluster
16:01 haomaiwa_ joined #gluster
16:03 mobaer joined #gluster
16:09 calavera joined #gluster
16:11 shyam joined #gluster
16:13 Wojtek joined #gluster
16:14 Wojtek Hello all
16:15 Wojtek Does anyone know the difference between the following volume options: cluster.metadata-self-heal, cluster.entry-self-heal, cluster.data-self-heal
16:15 jobewan joined #gluster
16:15 caveat- joined #gluster
16:16 ron-slc joined #gluster
16:18 JoeJulian @extended attributes
16:18 glusterbot JoeJulian: (#1) To read the extended attributes on the server: getfattr -m .  -d -e hex {filename}, or (#2) For more information on how GlusterFS uses extended attributes, see this article: http://pl.atyp.us/hekafs.org/index.php/2011/04/glusterfs-extended-attributes/
16:18 JoeJulian Wojtek: See the last article.
16:19 tom[] more research suggests the log errors i see (mentioned above) are because, in my version, gluster forbids access to security.* xattrs when the volume is not mounted with the selinux option http://review.gluster.org/#/c/12953/
16:19 glusterbot Title: Gerrit Code Review (at review.gluster.org)
16:19 tom[] what are the considerations for using the selinux mount option?
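
For reference, the option tom[] is asking about is a mount-time flag; this is an illustrative invocation only, with placeholder server and volume names:

    # with -o selinux the client handles security.* xattr requests instead of
    # rejecting them, which in some versions is what floods the log with
    # "No data available" errors on symlink creation
    mount -t glusterfs -o selinux server1:/myvol /mnt/myvol
    # or persistently in /etc/fstab
    server1:/myvol  /mnt/myvol  glusterfs  defaults,_netdev,selinux  0 0
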
16:21 Ethical2ak joined #gluster
16:24 Wojtek Thanks JoeJulian, I'll read the article
16:26 Ethical2ak Hi everyone
16:27 Ethical2ak Is there anyone in here that has had problems with healing performance?
16:28 Ethical2ak or poor write performance while healing is in progress
16:29 JoeJulian Ethical2ak: It's been reported from time to time. Nothing consistent and nobody's taken the time to diagnose anything, except one AWS user who upgraded his instance and cured his problem.
16:33 Ethical2ak I'm running GlusterFS 3.7.5-1. I've got a 3-machine cluster with 2.3TB of data, for a total of 23 million files
16:35 farhorizon joined #gluster
16:35 Ethical2ak When one machine crashes, I replace it with a new one with no data and let the healing process heal the entire empty node. While the healing is running, I get very low write performance. Read performance is fine.
16:36 JoeJulian If I were to guess, I would suspect that you're io bound on the new server.
16:37 Ethical2ak I heard that GlusterFS struggles with small files, especially when there are a lot of them.
16:37 Ethical2ak 2.3TB for 23 million files
16:37 Ethical2ak around 6KB per file.
16:39 Ethical2ak Does gluster have a maximum I/O speed? Like, is it limited?
16:39 JoeJulian There are lookups that have to be done in clustered systems to ensure the client isn't getting a stale version. Those lookups take time and that time is a greater percentage of the total read time when a file is small. There's no "struggle" there's just a general lack of understanding in the industry.
16:41 JoeJulian And no, there's no maximum io speed in gluster.
16:41 ajneil JoeJulian: thanks for your help yesterday, I am back up and stable with 2x CentOS6.7 and 1x CentOS 7 servers
16:41 RameshN joined #gluster
16:42 JoeJulian Ethical2ak: Use iotop and see where you're at with iops and throughput on the new server and see how close that is to your theoretical max.
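
A rough sketch of the measurement JoeJulian suggests, run on the newly replaced server; the volume name is a placeholder:

    # per-process disk throughput, only active processes, three samples
    iotop -o -b -n 3
    # device-level utilization; %util pinned near 100 suggests the heal is io bound
    iostat -x 5
    # how much work the self-heal still has queued, per brick
    gluster volume heal VOLNAME info | grep 'Number of entries'
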
16:42 JoeJulian ajneil: You're welcome. Happy to hear it. :)
16:43 ajneil Tomorrow I'll start upgrading the other nodes and be sure to back up and restore /var/lib/glusterd to save myself a lot of pain
16:44 ajneil or is it only /var/lib/glusterd/glusterd.info I really need to save?
16:45 JoeJulian I would do the whole /var/lib/glusterd tree if it were me.
16:45 ajneil gotcha - thanks again
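
A minimal sketch of the backup step being discussed; the archive path is arbitrary and the service commands assume systemd (use service glusterd stop/start on CentOS 6):

    # snapshot the glusterd state (peers, volume definitions, glusterd.info uuid)
    systemctl stop glusterd
    tar czf /root/glusterd-$(hostname)-$(date +%F).tar.gz /var/lib/glusterd
    systemctl start glusterd
    # if an upgrade goes wrong, stop glusterd, restore the tree, and start it again
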
16:48 DV__ joined #gluster
16:50 Ethical2ak Is there any kernel version that is optimal for GlusterFS?
16:50 Ethical2ak For perfs
16:51 atalur joined #gluster
17:03 F2Knight joined #gluster
17:04 arcolife joined #gluster
17:07 haomaiwang joined #gluster
17:08 geewiz joined #gluster
17:17 d0nn1e joined #gluster
17:20 JoeJulian Interesting question. I've not seen any perf testing done wrt specific kernels.
17:27 Slashman joined #gluster
17:33 CyrilPeponnet joined #gluster
17:35 calavera joined #gluster
17:47 nathwill joined #gluster
17:48 bennyturns joined #gluster
17:49 Manikandan joined #gluster
17:53 harold joined #gluster
17:54 JoeJulian Ethical2ak: If your writes are slow during heal, something's clearly bottlenecking. Gluster has priority queues and client traffic is supposed to be scheduled higher than self-heal traffic so it /shouldn't/ be bottlenecking within gluster. That leaves network, cpu, memory, and io. Any of those should be measurable.
17:55 JoeJulian If it is gluster, and you should have the hardware to prove it in my opinion, I would have to suggest you file a bug report.
17:55 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
17:55 JoeJulian As for potential tweaks, you might try changing io-thread settings and see if that helps.
17:57 mobaer joined #gluster
17:57 Slashman joined #gluster
17:58 Ethical2ak performance.io-thread-count: 64
17:58 Ethical2ak that's the setting I have
17:58 Ethical2ak performance.io-cache: on
17:58 Ethical2ak performance.cache-max-file-size: 64KB
17:58 Ethical2ak performance.write-behind-window-size: 8MB
17:59 JoeJulian frankly, I haven't ever seen any differences from anything but io-threads, but maybe I'm just not taxing the software hard enough to show anything.
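
For reference, the io-threads knob being discussed is an ordinary per-volume option; the volume name and value below are placeholders, not a recommendation:

    # the current value shows up under reconfigured options (the default is 16)
    gluster volume info VOLNAME | grep io-thread
    # raise or lower it and watch write latency during a heal
    gluster volume set VOLNAME performance.io-thread-count 32
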
18:03 Wojtek JoeJulian: Is there a way to throttle the gluster heal operations? Our writes are not slow per se, the speed is fine, but the number of operations per second is greatly diminished during a healing operation
18:09 JoeJulian Not really, no. You could maybe use ionice on the self heal daemons (pgrep -f glustershd) but the cli only has options to disable.
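
A sketch of the ionice workaround JoeJulian mentions; it assumes the bricks sit behind an io scheduler that honors priorities (e.g. CFQ), which may not hold on every setup:

    # find the self-heal daemon processes on this server
    pgrep -f glustershd
    # push them into the idle io class so client traffic wins disk contention
    for pid in $(pgrep -f glustershd); do ionice -c3 -p "$pid"; done
    # or merely lower their priority within the best-effort class
    # ionice -c2 -n7 -p <pid>
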
18:11 Wojtek I believe I have attempted that, as well as trying to cgroups the process but the results were not as expected - the whole cluster became slower :)
18:11 Wojtek And regarding my earlier question: I read your article about the extended attributes, and I believe that maps to the 'cluster.metadata-self-heal' option, while 'cluster.data-self-heal' seems to refer to the actual data of the files. But that leaves 'cluster.entry-self-heal'; I'm not sure what it refers to
18:13 JoeJulian Namespace operations – create, delete, rename, etc
18:14 Wojtek Oki! Thanks
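
For reference, the three options Wojtek asked about line up with the three kinds of heal JoeJulian describes, and each can be toggled per volume; the volume name is a placeholder:

    # permissions, ownership, and extended attributes
    gluster volume set VOLNAME cluster.metadata-self-heal off
    # file contents
    gluster volume set VOLNAME cluster.data-self-heal off
    # namespace operations: create, delete, rename
    gluster volume set VOLNAME cluster.entry-self-heal off
    # note: these govern heals triggered from the client side; the self-heal
    # daemon is controlled separately via cluster.self-heal-daemon
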
18:20 Ethical2ak Thanks Joe we'll look into it.
18:21 jwd joined #gluster
18:24 PsionTheory joined #gluster
18:24 mhulsman joined #gluster
18:29 amye joined #gluster
18:32 Rapture joined #gluster
18:34 nottc joined #gluster
18:46 MTO joined #gluster
18:47 JoeJulian Ethical2ak: https://bugzilla.redhat.com/show_bug.cgi?id=1296271
18:47 glusterbot Bug 1296271: low, unspecified, ---, bugs, NEW , Enhancement: Configure maximum iops for shd
18:48 ahino joined #gluster
18:50 ChrisNBlum joined #gluster
18:54 Wojtek Thanks JoeJulian
18:54 Ethical2ak wow thanks Joe
19:01 kovshenin joined #gluster
19:27 jobewan joined #gluster
19:45 kovshenin joined #gluster
20:00 mhulsman joined #gluster
20:05 volga629 joined #gluster
20:06 volga629 Hello everyone, what does it mean if I see a file like this
20:06 volga629 -??????????     ? ?      ?         ?            ? canlwww01.qcow2
20:06 volga629 does it mean the file is locked?
20:10 hagarth volga629: check if one of your bricks is not available. it does look like the result of an incomplete readdir operation.
20:10 volga629 if I do gluster vol status
20:10 volga629 it shows all 3 online
20:11 volga629 I am not sure, but when I create an additional disk for libvirt it breaks gluster
20:11 volga629 no matter how many nodes
20:11 volga629 it doesn't want to stay up in gluster
20:12 JoeJulian volga629: check your client log for clues.
20:12 bluenemo joined #gluster
20:13 volga629 0-datapoint02-client-1: remote operation failed [Transport endpoint is not connected]
20:15 volga629 [client.c:2042:client_rpc_notify] 0-datapoint02-client-1: disconnected from datapoint02-client-1. Client process will keep trying to connect to glusterd until brick's port is available
20:15 JoeJulian well, there's your reason. now you just need to track down why.
20:16 JoeJulian If the service is listening, then I would start looking at network
20:17 volga629 all servers ping at 1.0 ms
20:17 volga629 they're in the same datacenter
20:17 JoeJulian was this working before?
20:18 volga629 yes
20:18 JoeJulian what changed?
20:18 volga629 for one week
20:18 volga629 0-glustershd: readv on /var/run/gluster/78f2880705db2c3b225195bf402af889.socket failed (Invalid argument)
20:19 volga629 I tried qemu-img create -f qcow2 gluster://`hostname`/datapoint02/canlrt03.qcow2 10G
20:20 volga629 and it brought down the whole thing
20:21 volga629 I suppose 3 nodes should stay up
20:21 JoeJulian Perhaps one of your bricks (client-1 would be the second brick in the volume) has stopped working correctly. Kill the process shown in volume status and start it again with "gluster volume start datapoint02 force"
20:22 JoeJulian If you can figure out how to make that failure happen reliably, file a bug report so it can get fixed.
20:22 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
20:22 volga629 [2016-01-06 17:02:25.263211] E [MSGID: 108006] [afr-common.c:3880:afr_notify] 0-datapoint02-replicate-0: All subvolumes are down. Going offline until atleast one of them comes back up.
20:23 volga629 that error is from qemu-img
20:23 volga629 there are some VMs running on this brick right now
20:24 F2Knight joined #gluster
20:24 volga629 I need to shut down everything to kill gluster
20:27 volga629 gluster volume start datapoint02 force
20:27 volga629 volume start: datapoint02: success
20:28 volga629 here log
20:28 volga629 http://fpaste.org/307978/45211207/
20:28 glusterbot Title: #307978 Fedora Project Pastebin (at fpaste.org)
20:45 volga629 only this image is affected
20:45 volga629 ls: cannot access canlwww01.qcow2: Input/output error
20:45 volga629 do I need to stop and start the brick?
20:46 JoeJulian volga629: Sounds like ,,(split-brain)
20:46 glusterbot volga629: To heal split-brains, see https://gluster.readthedocs.org/en/release-3.7.0/Features/heal-info-and-split-brain-resolution/ For additional information, see this older article https://joejulian.name/blog/fixing-split-brain-with-glusterfs-33/ Also see splitmount https://joejulian.name/blog/glusterfs-split-brain-recovery-made-easy/
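
A sketch of the CLI split-brain workflow from the linked documentation, using the volume name from this conversation; the file path and brick are placeholders, and the choice of policy (bigger-file vs source-brick) depends on which copy is actually good:

    # list the files the cluster considers split-brained
    gluster volume heal datapoint02 info split-brain
    # resolve by keeping the larger copy (path is relative to the volume root)
    gluster volume heal datapoint02 split-brain bigger-file /tmp/canlwww01.qcow2
    # or declare one brick authoritative for that file
    gluster volume heal datapoint02 split-brain source-brick server1:/brick/datapoint02 /tmp/canlwww01.qcow2
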
20:47 volga629 canlwww01.qcow2 - Is in split-brain
20:47 shyam joined #gluster
20:49 foster joined #gluster
20:51 volga629 how do I fix it?
20:52 JoeJulian I gave you the link to the documentation for that.
20:53 volga629 why would this happen if 3 bricks are in place?
20:56 JoeJulian If you haven't enabled quorum, it's possible.
20:56 haomaiwa_ joined #gluster
20:59 volga629 Lookup failed on /var/lib/vm_store/tmp/canlwww01.qcow2:No such file or directory
20:59 volga629 on the heal command
20:59 volga629 but ls -la works
21:00 volga629 How do I enable quorum properly?
21:00 JoeJulian @quorum
21:00 JoeJulian hmm, I need to create that factoid
21:01 volga629 I set cluster.self-heal-daemon: enable
21:01 volga629 this fails: gluster volume heal test split-brain bigger-file
21:02 Wojtek http://www.gluster.org/community/documentation/index.php/Gluster_3.2:_Setting_Volume_Options#cluster.quorum-type
21:05 MTO left #gluster
21:06 onebree joined #gluster
21:06 volga629 is there preference or best practice how to set it right ?
21:07 onebree hello all. I am trying to mount, as per the gluster docs (manual mounting). Do I need to start the glusterd (or glusterfsd) daemon first?
21:07 onebree My error says DNS cannot be resolved, yet I am able to ping the hosts I am trying to reach.
21:07 calavera joined #gluster
21:07 volga629 I got Number of healed entries: 1
21:08 volga629 file is fixed
21:11 volga629 these options are configured http://fpaste.org/308000/52114695/
21:11 glusterbot Title: #308000 Fedora Project Pastebin (at fpaste.org)
21:11 JoeJulian onebree: no, glusterd does not need to be even installed on a client.
21:13 onebree Okay. IDK why I am getting my error then. My structure is as follows: files from path/to/local are copied into path/to/client, which I want gluster to do its magic on and replicate onto 3 bricks /export/path/to/brick
21:21 JoeJulian paste your client log
21:24 volga629 gluster volume set all cluster.server-quorum-ratio 51%, is this a good value?
21:24 bowhunter joined #gluster
21:33 onebree JoeJulian:  do you want MY log?
21:33 JoeJulian That will require that your clients must be able to connect to 51% or more of your servers or they will go read-only. Assuming 3 servers, that will mean the client must be able to see two of them. That will prevent split-brain.
21:33 JoeJulian onebree: yes, so I can see the actual error message and, potentially, look at the source to see what it's trying to do.
21:35 volga629 I see
21:36 onebree Here is my log. devN servers are where the bricks are located. https://gist.github.com/onebree/c052c9fe6fdf1648172b
21:36 glusterbot Title: gs.log · GitHub (at gist.github.com)
21:39 volga629 gluster volume set datapoint02 cluster.server-quorum-ratio 51%
21:39 volga629 volume set: failed: Not a valid option for single volume
21:41 volga629 do I need to stop all bricks?
21:41 onebree JoeJulian: please refresh the gist -- I just added the command I ran
21:47 JoeJulian onebree: Well, you're doing it right, but that log isn't helping us at all. Can you try again but set the log-level=DEBUG?
21:53 bennyturns joined #gluster
21:54 volga629 if I have 3 bricks, do I need to stop one to set 51%?
21:58 onebree I ran the following command, and it says "--log-level=DEBUG" is unrecognized:
21:58 onebree sudo mount -t glusterfs -o log-file=/home/hstevens/gs.log --log-level=DEBUG dev0:hunter_test /home/hstevens/GLUSTER/volume_path
21:59 onebree I will be back tomorrow, and do anything you suggest then (16 hours) :-)
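
For what it's worth, mount(8) will not accept --log-level as a standalone flag; log-level is a mount.glusterfs option and belongs inside -o, so a corrected invocation (same paths as above) would likely look like:

    sudo mount -t glusterfs \
        -o log-file=/home/hstevens/gs.log,log-level=DEBUG \
        dev0:hunter_test /home/hstevens/GLUSTER/volume_path
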
22:00 volga629 How do I set this for all volumes? I get: volume set: failed: Not a valid option for single volume
22:02 jvandewege_ joined #gluster
22:04 VeggieMeat_ joined #gluster
22:04 JoeJulian volga629: I'm sorry, I also just realized I told you the wrong thing. server-quorum stops the bricks for a server that loses quorum. So if serverA loses communications with both serverB and serverC, serverA will stop serving bricks. What I described earlier was based on cluster.quorum-type and cluster.quorum-count.
22:04 ccha5 joined #gluster
22:05 bowhunter joined #gluster
22:05 XpineX joined #gluster
22:05 JoeJulian I believe to be able to set the cluster.server-quorum-ratio, you have to set the cluster.server-quorum-type
22:05 the-me joined #gluster
22:06 liewegas joined #gluster
22:06 Peppard joined #gluster
22:06 volga629 gluster volume set datapoint02 cluster.server-quorum-type server
22:06 volga629 gluster volume set datapoint02 cluster.server-quorum-type auto
22:07 DV joined #gluster
22:08 volga629 that's what is set
22:08 d0nn1e joined #gluster
22:09 fyxim joined #gluster
22:09 k-ma joined #gluster
22:12 volga629 that worked
22:12 volga629 gluster volume set all cluster.server-quorum-ratio 51
22:13 volga629 51%
22:14 mobaer joined #gluster
22:14 volga629 is there another option which might help minimize a situation like what happened today?
22:16 volga629 monitor glusterfs
22:16 JoeJulian some people like using nagios for that. We're using consul here.
22:17 volga629 We use check_mk
22:17 volga629 going to check it
22:18 volga629 but from a gluster perspective, is there anything else that can be done?
22:18 virusuy joined #gluster
22:19 hagarth joined #gluster
22:19 JoeJulian Well, like I mentioned, you've set a server quorum. You can also set a quorum that's used on the client. That would help in case the client (only) loses connection to some servers even if the servers see each other. If your clients are only on your servers, then it's irrelevant.
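
A sketch of the two quorum layers JoeJulian contrasts, applied to a 3-server replica volume like the one in this conversation; the 51% ratio mirrors what was set above and "auto" is the usual client-side choice, not a prescription:

    # server-side quorum: a server that cannot see more than half of its peers stops its bricks
    gluster volume set datapoint02 cluster.server-quorum-type server
    gluster volume set all cluster.server-quorum-ratio 51%
    # client-side quorum: a client refuses writes unless it can reach a majority
    # of the bricks in each replica set
    gluster volume set datapoint02 cluster.quorum-type auto
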
22:21 jermudge- joined #gluster
22:23 neofob joined #gluster
22:24 volga629 I see, in my case the clients are the servers
22:24 volga629 I will try to add monitoring today
22:24 volga629 so that it pages me
22:24 volga629 I hope it will be more stable now
22:24 volga629 thank you for your help and time
22:27 JoeJulian You're welcome.
22:34 harish_ joined #gluster
22:52 bennyturns joined #gluster
22:58 amye joined #gluster
23:02 RedW joined #gluster
23:08 F2Knight joined #gluster
23:29 owlbot joined #gluster
23:50 farhorizon joined #gluster
23:56 Rapture joined #gluster
23:58 JesperA joined #gluster
