
IRC log for #gluster, 2016-03-03


All times shown according to UTC.

Time Nick Message
00:01 tessier Hmm...the two CentOS 7 machines are having the same trouble. But the CentOS 5 machine I have connected is not....wonder what that means.
00:01 haomaiwa_ joined #gluster
00:08 sebamontini joined #gluster
00:11 JoeJulian I may have been looking at this backwards. Run glusterd --debug on the host being probed, not the host that's probing.
00:11 JoeJulian tessier: ^
00:15 sebamontini joined #gluster
00:28 shlant joined #gluster
00:28 shlant hi all. anyone have a suggestion for working around this bug? https://bugs.launchpad.net/ubuntu/+source/glusterfs/+bug/1382989
00:28 sebamontini joined #gluster
00:28 glusterbot Title: Bug #1382989 “glusterfs-client is build without /usr/bin/fusermo...” : Bugs : glusterfs package : Ubuntu (at bugs.launchpad.net)
00:29 shlant fuse exists on 14.04 but not fuse-utils
00:32 JoeJulian My first suggestion is not to use the decrepit broken downstream packages. :)
00:32 JoeJulian @ppa
00:32 glusterbot JoeJulian: The official glusterfs packages for Ubuntu are available here: 3.5: http://goo.gl/6HBwKh 3.6: http://goo.gl/XyYImN 3.7: https://goo.gl/aAJEN5 -- See more PPAs for QEMU with GlusterFS support, and GlusterFS QA releases at https://launchpad.net/~gluster -- contact semiosis with feedback
00:35 amye joined #gluster
00:37 Chinorro joined #gluster
00:41 elitecoder joined #gluster
00:41 itisravi joined #gluster
00:43 elitecoder So I'm getting this error. My two servers, running in Replication mode are functional by the way...
00:43 elitecoder [2016-03-03 00:29:12.593240] E [name.c:147:client_fill_address_family] 0-glusterfs: transport.address-family not specified. Could not guess default value from (remote-host:(null) or transport.unix.connect-path:(null)) options
00:44 elitecoder How can I set this option?
00:44 elitecoder Or rather, what's the best way.
00:44 elitecoder I might be able to go into the glusterd.vol file, I don't know...
00:45 elitecoder I tried a gluster vol set help and the brick on the box i ran the command on seems to have crashed sooo
00:51 mowntan joined #gluster
00:53 tessier joined #gluster
00:56 shlant1 joined #gluster
00:56 elitecoder Is the only option to manually set the transport.address-family in all .vol files?
00:57 JoeJulian Never seen anyone with that problem before.
00:59 JoeJulian elitecoder: When are you seeing that error?
01:01 haomaiwa_ joined #gluster
01:01 elitecoder JoeJulian: I think it keeps popping up constantly
01:02 elitecoder Let me check if I'm still getting it now
01:02 JoeJulian Where?
01:03 elitecoder Oh hm
01:03 elitecoder /var/log/glusterfs/cli.log
01:04 elitecoder Ah, started up the cli and it popped up again
01:05 elitecoder I kind of don't care much about that... The CLI is working fine.
01:06 elitecoder Yeah it keeps getting logged while I'm in the CLI
01:16 theron joined #gluster
01:17 EinstCrazy joined #gluster
01:21 elitecoder I guess I can ignore it... laterz
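A hedged sketch of one way to set the option elitecoder asks about (the file path is assumed from a standard RPM layout and is not confirmed in the discussion above):

```shell
# Sketch only — path assumed. In /etc/glusterfs/glusterd.vol, inside the
# "volume management" block, add:
#     option transport.address-family inet
# then restart the management daemon so the CLI no longer has to guess:
service glusterd restart
```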
01:40 pppp joined #gluster
01:56 theron joined #gluster
01:58 baojg joined #gluster
02:02 haomaiwa_ joined #gluster
02:07 nishanth joined #gluster
02:14 Lee1092 joined #gluster
02:18 xavih_ joined #gluster
02:18 liewegas_ joined #gluster
02:18 zoldar_ joined #gluster
02:18 purpleid1a joined #gluster
02:19 arcolife joined #gluster
02:22 overclk joined #gluster
02:22 cogsu joined #gluster
02:23 ChrisHolcombe joined #gluster
02:24 chirino joined #gluster
02:25 unlaudable joined #gluster
02:25 jbrooks joined #gluster
02:26 arcolife joined #gluster
02:27 pjrebollo joined #gluster
02:39 chirino_m joined #gluster
02:46 harish_ joined #gluster
02:46 amye joined #gluster
02:51 nangthang joined #gluster
02:59 pjrebollo joined #gluster
03:01 edong23 joined #gluster
03:01 haomaiwa_ joined #gluster
03:11 jbrooks joined #gluster
03:15 shlant joined #gluster
03:16 shlant if my client and server are the same host, can I mount my gluster volume to the same directory it is created with?
03:16 shlant like I have /data
03:17 shlant I create a volume @ /data/gluster
03:17 shlant can I then mount to there?
03:17 shlant or /data/gluster/blah?
03:21 jwang_ joined #gluster
03:23 Chr1st1an_ joined #gluster
03:23 morse_ joined #gluster
03:24 ccoffey_ joined #gluster
03:25 misc_ joined #gluster
03:25 _nixpani1 joined #gluster
03:25 DJCl34n joined #gluster
03:25 cogsu joined #gluster
03:25 shortdudey123_ joined #gluster
03:25 mzinkf joined #gluster
03:25 _nixpani1 joined #gluster
03:25 Nuxr0 joined #gluster
03:25 p8952_ joined #gluster
03:25 JonathanS joined #gluster
03:25 saltsa_ joined #gluster
03:25 marlinc_ joined #gluster
03:26 frakt_ joined #gluster
03:26 wiza_ joined #gluster
03:26 JoeJulian_ joined #gluster
03:26 Chinorro joined #gluster
03:26 rossdm joined #gluster
03:26 DJClean joined #gluster
03:26 Ulrar joined #gluster
03:27 troj joined #gluster
03:27 samikshan joined #gluster
03:27 nangthang joined #gluster
03:27 atinm joined #gluster
03:28 mpingu joined #gluster
03:28 Ramereth|home joined #gluster
03:28 glisignoli joined #gluster
03:31 dblack joined #gluster
03:36 scuttle joined #gluster
03:38 baojg joined #gluster
03:39 fsimonce joined #gluster
03:40 skoduri joined #gluster
04:04 itisravi joined #gluster
04:10 kanagaraj joined #gluster
04:20 pur joined #gluster
04:21 DV joined #gluster
04:22 Chinorro joined #gluster
04:28 DV joined #gluster
04:30 harish_ joined #gluster
04:30 amye joined #gluster
04:30 shubhendu joined #gluster
04:31 DV joined #gluster
04:32 haomaiwa_ joined #gluster
04:33 karthikfff joined #gluster
04:35 shubhendu joined #gluster
04:37 calavera joined #gluster
04:46 gem joined #gluster
04:53 calavera joined #gluster
04:57 ppai joined #gluster
05:01 haomaiwa_ joined #gluster
05:01 aravindavk joined #gluster
05:08 ndarshan joined #gluster
05:09 RameshN joined #gluster
05:09 itisravi joined #gluster
05:11 sakshi joined #gluster
05:12 theron joined #gluster
05:12 Manikandan joined #gluster
05:13 pppp joined #gluster
05:15 karnan joined #gluster
05:25 DV joined #gluster
05:30 Apeksha joined #gluster
05:34 gowtham joined #gluster
05:35 gowtham na do you have any patch related to task collection
05:35 gowtham in mongo db
05:36 hgowtham joined #gluster
05:37 DV joined #gluster
05:39 nehar joined #gluster
05:45 kshlm joined #gluster
05:46 Saravanakmr joined #gluster
05:50 vmallika joined #gluster
05:51 ggarg joined #gluster
05:55 Saravanakmr joined #gluster
06:00 ramky joined #gluster
06:01 haomaiwa_ joined #gluster
06:02 kotreshhr joined #gluster
06:02 poornimag joined #gluster
06:04 Bhaskarakiran joined #gluster
06:04 Arrfab joined #gluster
06:05 kovshenin joined #gluster
06:10 F2Knight joined #gluster
06:28 ashiq joined #gluster
06:31 anil joined #gluster
06:39 atalur joined #gluster
06:40 jiffin joined #gluster
06:43 ayma joined #gluster
07:00 DV joined #gluster
07:01 haomaiwa_ joined #gluster
07:08 jtux joined #gluster
07:11 xavih joined #gluster
07:15 marbu joined #gluster
07:16 nishanth joined #gluster
07:18 anil joined #gluster
07:23 karnan joined #gluster
07:24 natarej joined #gluster
07:30 mhulsman joined #gluster
07:30 kdhananjay joined #gluster
07:31 mhulsman joined #gluster
07:33 bhuddah joined #gluster
07:36 [Enrico] joined #gluster
07:47 aravindavk joined #gluster
07:52 EinstCra_ joined #gluster
07:55 DV joined #gluster
07:56 gowtham joined #gluster
07:56 hchiramm joined #gluster
08:01 haomaiwa_ joined #gluster
08:06 EinstCrazy joined #gluster
08:09 SOLDIERz joined #gluster
08:10 DV__ joined #gluster
08:11 [Enrico] joined #gluster
08:14 siel joined #gluster
08:15 Chinorro joined #gluster
08:20 DV joined #gluster
08:28 Chinorro joined #gluster
08:31 ivan_rossi joined #gluster
08:38 ctria joined #gluster
08:38 aravindavk joined #gluster
08:40 taavida1 joined #gluster
08:49 haomaiwa_ joined #gluster
08:59 spalai joined #gluster
08:59 jww joined #gluster
08:59 jww Hello.
08:59 glusterbot jww: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
08:59 jri joined #gluster
09:00 jww is it possible to connect with a gluster client 3.5 to a gluster server 3.2.2 ?
09:01 7GHAAFP2C joined #gluster
09:07 ggarg joined #gluster
09:08 Slashman joined #gluster
09:10 hchiramm joined #gluster
09:12 muneerse joined #gluster
09:19 EinstCrazy joined #gluster
09:25 deniszh joined #gluster
09:34 jiffin1 joined #gluster
09:45 [Enrico] joined #gluster
09:46 anil joined #gluster
09:52 robb_nl joined #gluster
09:56 haomaiwa_ joined #gluster
10:01 haomaiwa_ joined #gluster
10:05 gem_ joined #gluster
10:09 [Enrico] joined #gluster
10:13 EinstCrazy joined #gluster
10:21 jiffin1 joined #gluster
10:31 madnexus joined #gluster
10:31 itisravi jww: older clients and older servers are not supported.
10:31 itisravi jww: ugg I mean newer clients*
10:32 nbalacha joined #gluster
10:32 madnexus hi guys! having really bad performance issues on a glusterfs new installation using RDMA
10:32 madnexus won't write faster than 8Mb/s
10:33 madnexus centos 7.2 + mellanox fibre cards
10:33 madnexus glusterfs 3.7.8
10:33 madnexus anybody had similar problems here?
10:33 jww itisravi: ok ! thanks for the info.
10:33 itisravi np!
10:35 post-factum madnexus: https://bugzilla.redhat.com/show_bug.cgi?id=1309462
10:35 glusterbot Bug 1309462: low, unspecified, ---, ravishankar, ASSIGNED , Upgrade from 3.7.6 to 3.7.8 causes massive drop in write performance.  Fresh install of 3.7.8 also has low write performance
10:38 madnexus oh, thanks very much guys... was having a headache here with this
10:38 Javezim joined #gluster
10:39 post-factum madnexus: http://review.gluster.org/#/c/13540/ works for me
10:39 glusterbot Title: Gerrit Code Review (at review.gluster.org)
10:39 post-factum not sure if that is *full* solution for the issue, but it works
10:39 post-factum poornimag: ^^
10:40 madnexus thanks mate, going to check that right now
10:40 Javezim Hey Everyone, So have a bit of an issue. We are backing up to a gluster cluster using a product called Storagecraft Shadowprotect. We share the folder that it's backing up to via the VFS Samba module for Gluster. A secondary product, Storagecraft Imagemanager, is designed to consolidate the files that are made for Shadowprotect, however it keeps reporting the images as Error -31 Device not found. It seems to me like the Samba VFS is not communicating with gluster or is dropping off. Does anyone know of any tweaks that can be implemented, maybe in the gluster options or smb.conf, to make gluster work better with Samba?
10:42 ggarg joined #gluster
10:44 madnexus post-factum: do you recommend using an older version? seems like this is not the best solution for machines on production
10:46 madnexus gluster volume set VOLUME performance.write-behind off seems to improve the speed back to "normal"
10:57 [Enrico] joined #gluster
10:58 anti[Enrico] joined #gluster
11:01 haomaiwa_ joined #gluster
11:08 aravindavk joined #gluster
11:08 kshlm joined #gluster
11:08 itisravi_ joined #gluster
11:08 ppai joined #gluster
11:08 jiffin joined #gluster
11:09 Manikandan joined #gluster
11:09 pppp joined #gluster
11:09 Bhaskarakiran joined #gluster
11:09 atalur joined #gluster
11:09 gem joined #gluster
11:09 poornimag joined #gluster
11:09 rjoseph joined #gluster
11:09 msvbhat joined #gluster
11:09 lalatenduM joined #gluster
11:09 shubhendu joined #gluster
11:09 nishanth joined #gluster
11:09 atinm joined #gluster
11:09 ggarg joined #gluster
11:09 shruti joined #gluster
11:10 ramky joined #gluster
11:10 RameshN joined #gluster
11:11 hchiramm joined #gluster
11:11 karthikfff joined #gluster
11:12 post-factum madnexus:  I use 3.7.6 + cherry-picked patches to fix memory leaks and some crashes
11:13 ndarshan joined #gluster
11:14 madnexus I see.. do you use gluster on RedHat/CentOS?
11:16 post-factum centos 7
11:17 post-factum i have rpms built, if necessary, you may grab them
11:17 hackman joined #gluster
11:23 [diablo] joined #gluster
11:24 anoopcs Javezim, Linux client or Windows client?
11:24 Javezim anoopcs Windows Client
11:25 anoopcs Javezim, Can you please share you gluster-volume share section in smb.conf?
11:26 anoopcs You can use termbin
11:26 anoopcs http://termbin.com/
11:26 glusterbot Title: termbin.com - terminal pastebin (at termbin.com)
11:27 Javezim wide links = no
11:27 Javezim writeable = yes
11:27 Javezim path = /xxxxxxxxxxxx
11:27 Javezim force user = root
11:27 Javezim force group = root
11:27 RameshN joined #gluster
11:27 Javezim public = yes
11:27 Javezim guest ok = yes
11:27 Javezim create mode = 660
11:27 Javezim directory mode = 770
11:27 Javezim kernel share modes = No
11:27 Javezim vfs objects = recycle glusterfs aio_pthread
11:27 Javezim glusterfs:volfile_server = localhost
11:27 Javezim glusterfs:volume = gv0mel
11:27 Javezim glusterfs:logfile = /var/log/glusterfs/glusterfs-xxxxxxxxxx.log
11:27 Javezim #glusterfs:loglevel = 10
11:27 Javezim recycle:repository = .recycle
11:27 Javezim recycle:keeptree = yes
11:27 Javezim recycle:versions = yes
11:27 Javezim strict locking = no
11:28 anoopcs Javezim, Please use some paste service as I mentioned before for further sharing of info.
11:29 post-factum Javezim: @paste
11:29 post-factum @paste
11:29 glusterbot post-factum: For a simple way to paste output, install netcat (if it's not already) and pipe your output like: | nc termbin.com 9999
11:31 Javezim nc
11:32 anoopcs Javezim, It's not recommended to use other vfs modules along with glusterfs
11:32 anoopcs Javezim, aio_pthread will have not effect for sure.
11:33 Javezim So with a folder within the glusterfs cluster, how would you share it?
11:34 atalur joined #gluster
11:36 anoopcs Javezim, and glusterfs itself provides a recycle bin facility, in case you need that support.
11:36 atalur joined #gluster
11:36 rastar joined #gluster
11:36 anoopcs Javezim, Regarding sub-directory share
11:39 kotreshhr joined #gluster
11:41 ggarg joined #gluster
11:43 madnexus post-factum: where can I get those rpms? thanks!
11:44 anoopcs Javezim, you need to keep path as / itself.
11:45 Javezim How do you mean?
11:46 sebamontini joined #gluster
11:46 anoopcs Javezim, You can directly access the sub-directory as \\server-name\gluster-share-name\directory
11:46 anoopcs on windows explorer
11:47 Javezim So you reckon don't use Samba at all?
11:47 johnmilton joined #gluster
11:47 anoopcs Javezim, I don't get that.
11:49 anoopcs Javezim, a simple gluster share section will look something like this
11:50 anoopcs http://ur1.ca/olie9
11:50 glusterbot Title: #332842 Fedora Project Pastebin (at ur1.ca)
11:50 anoopcs Javezim, ^^
11:50 Javezim Wow
11:51 Javezim Okay a lot different
11:51 anoopcs Javezim, Of course you can add more share specific
11:51 Javezim So on ours
11:51 Javezim What do you reckon is the main issue?
11:51 anoopcs parameters but make sure that you don't mess up path and vfs parameters
11:52 anoopcs Javezim, Coming to your issues... Were you able to see any error messages from Samba or glusterfs logs?
11:52 spalai joined #gluster
11:53 Javezim Nothing solid no
11:53 Javezim The only error was coming from the client
11:53 Javezim saying that the device was no longer respondng
11:53 bluenemo joined #gluster
11:53 Javezim It had network connectivity though
11:54 Javezim so the only thing we could think is samba was dropping or gluster was
11:54 anoopcs Javezim, gluster cluster is in healthy state or not? Try gluster volume status <volname> output to verify the same
11:55 EinstCrazy joined #gluster
11:56 Javezim Hmm two of the Bricks show as Online = N
11:57 Javezim What else would I be looking for?
11:58 anoopcs Javezim, If those two are the only bricks then they are good.
11:59 post-factum madnexus: wait a minute
11:59 anoopcs Basically all bricks corresponding to that volume should be online.
11:59 Javezim [vault]
11:59 Javezim comment = GlusterFS volume share
11:59 Javezim path = /vault
11:59 Javezim guest ok = yes
11:59 Javezim read only = no
11:59 Javezim kernel share modes = no
11:59 Javezim vfs objects = glusterfs
12:00 Javezim glusterfs:loglevel = 7
12:00 Javezim glusterfs:logfile = /usr/local/var/log/samba/glusterfs-vol.%M.log
12:00 Javezim glusterfs:volume = vol
12:00 Javezim glusterfs:volfile_server = 192.168.1.100
12:00 anoopcs Javezim, Sorryy...
12:00 Javezim Ok I've got this setup in the Smb.conf file
12:00 Javezim But the share isn't working
12:00 anoopcs Javezim, Online = N means bricks are not online
12:01 anoopcs Javezim, Try stop and start of the volume
12:01 anoopcs Javezim, Brick status must be Y under online column.
12:01 haomaiwa_ joined #gluster
12:03 anoopcs Javezim, Please use termbin.com or other paste service so that you don't flood the channel with such info.
12:05 Javezim My bad
12:05 Javezim the glusterfs:volume = x
12:05 Javezim That needs to be the volume that was setup?
12:05 Javezim ie in my case gv0
12:05 anoopcs yes..yes
12:05 atinm joined #gluster
12:05 anoopcs volume name
12:05 Javezim can see the share but still can't access it from Win Client
12:05 Javezim See any reason why?
12:05 anoopcs Javezim, What do you mean "I can see"?
12:06 Javezim So if I browse to \\192.168.1.100\
12:06 Javezim the folder "Vault" is there
12:06 Javezim But cannot access
12:06 anoopcs Javezim, What's the error?
12:06 post-factum madnexus: https://dl.dropboxusercontent.com/u/7541592/glusterfs-3.7.6%2Bpatches.tar
12:07 post-factum madnexus: x86_64, el7
12:07 anoopcs Javezim, Assuming your glusterfs bricks are online, it should be fine.
12:07 Javezim Windows cannot access \\192.168.1.100\vault\
12:09 anoopcs Javezim, Hmm.
12:09 anoopcs Javezim, path = /
12:09 anoopcs Javezim, and not /vault
12:09 anoopcs Javezim, Can you make that change and try again?
12:12 RameshN joined #gluster
12:14 Javezim Still no luck
12:15 theron joined #gluster
12:15 anoopcs Javezim, Ok. 192.168.1.100..Is that the ip you used to create gluster volume?
12:15 Javezim Yes
12:16 anoopcs Javezim, Whether Samba is running on the same server where glusterd is running?
12:16 Javezim Yes it is
12:17 madnexus post-factum: great! thanks very much :)
12:18 anoopcs Javezim, And you don't see any Samba errors in samba logs or any errors in /usr/local/var/log/samba/glusterfs-%M.log?
12:20 pppp joined #gluster
12:20 anoopcs Javezim, Can you please share me the output of following commands using fpaste.org?
12:20 nehar joined #gluster
12:21 post-factum madnexus: i need some feedback from you
12:23 Javezim http://fpaste.org/332871/14570077/
12:23 glusterbot Title: #332871 Fedora Project Pastebin (at fpaste.org)
12:24 Wizek joined #gluster
12:24 madnexus post-factum: using this compile of yours?
12:25 haomaiwa_ joined #gluster
12:25 sebamontini joined #gluster
12:25 Javezim anoopcs, what's this all about ->   smbd_vfs_init: vfs_init_custom failed for glusterfs
12:26 madnexus post-factum: also found mounting the brick (fstab, _netdev) on boot does not work... just wondering if you also found similar issues with this
12:29 anoopcs Javezim, Ah...There it is
12:29 kshlm Javezim, line 12 says that the vfs-gluster module isn't available.
12:30 anoopcs Javezim, Output of rpm -qa | grep samba
12:30 anoopcs kshlm, You said it.
12:30 Javezim Nothing comes of that
12:31 anoopcs Javezim, What's your server OS?
12:31 kshlm Javezim, you need to install the 'samba-vfs-glusterfs' module
12:31 kshlm s/module/package/
12:31 glusterbot What kshlm meant to say was: Javezim, you need to install the 'samba-vfs-glusterfs' package
12:32 Javezim 14.04.3
12:32 Javezim Ubuntu
12:32 anoopcs Javezim, Then use
12:33 anoopcs apt --installed list | grep "gluster\|samba"
12:34 Javezim http://fpaste.org/332877/14570084/
12:34 glusterbot Title: #332877 Fedora Project Pastebin (at fpaste.org)
12:36 muneerse joined #gluster
12:36 kanagaraj joined #gluster
12:36 anoopcs Javezim, As kshlm said you are missing samba-vfs-glusterfs package and I am trying to figure out how to install it.
12:37 * anoopcs is not at all familiar with apt and deb packages
12:37 anoopcs Javezim, I think you need to install some special PPA (Yeah..ppa) for samba-vfs-glusterfs
12:37 Javezim ppa:monotek/samba-vfs-glusterfs-3.6
12:37 Javezim https://launchpad.net/~monotek/+archive/ubuntu/samba-vfs-glusterfs-3.6
12:38 glusterbot Title: samba-vfs-glusterfs-3.6 : André Bauer (at launchpad.net)
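A hedged sketch of installing the module from the PPA linked above on Ubuntu 14.04 (the package name and module path are assumptions, not confirmed in-channel):

```shell
sudo add-apt-repository ppa:monotek/samba-vfs-glusterfs-3.6
sudo apt-get update
sudo apt-get install samba-vfs-modules
# smbd should now find the glusterfs VFS object; the path is distro-dependent:
ls /usr/lib/x86_64-linux-gnu/samba/vfs/glusterfs.so
sudo service smbd restart
```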
12:39 anoopcs Javezim, Yeah,..I think so.
12:39 kotreshhr joined #gluster
12:39 Javezim yolo
12:39 Wizek joined #gluster
12:41 anoopcs Javezim, I don't know whether that repo itself gives out other glusterfs packages.. If so then you may get some conflicts.
12:41 anoopcs s/repo/PPA
12:41 anoopcs s/repo/PPA/
12:41 glusterbot What anoopcs meant to say was: Javezim, I don't know whether that PPA itself gives out other glusterfs packages.. If so then you may get some conflicts.
12:42 * anoopcs is used to rpm style :-P
12:46 Javezim Hmm
12:46 Javezim Yeah I can't seem to get it
12:49 Javezim actually it now shows
12:49 Javezim samba-vfs-modules/trusty-updates,trusty-security,now 2:4.1.6+dfsg-1ubuntu2.14.04.12 amd64 [installed]
12:51 anoopcs Javezim, Still not installed, I guess.
12:51 taavida1 joined #gluster
12:51 Javezim Nope
12:51 Javezim hmm
12:52 post-factum madnexus: yep, using my rpms
12:52 baojg joined #gluster
12:52 post-factum madnexus: _netdev work for me if mount happens on 3rd-party client and not on node itself
12:52 post-factum madnexus: use /etc/rc.local :)
12:54 anoopcs Javezim, Are you familiar with manually installing PPAs, update and install packages?
12:54 Javezim Yeah is usually straight forwrd
12:54 Javezim this one however doesn't seem to be installing
12:55 Javezim anyway I'll continue on this tomorrow
12:55 Javezim thanks for your help
12:55 Javezim catchas
12:56 haomaiwa_ joined #gluster
12:59 haomaiwang joined #gluster
12:59 DV joined #gluster
13:00 mbukatov joined #gluster
13:01 haomaiwang joined #gluster
13:04 madnexus post-factum: well, that is a very hacky way and it won't work as rc.local is executed before all the stack is ready....
13:05 post-factum madnexus: you may hack further and stick to some while+pgrep construction :)
13:06 post-factum madnexus: i haven't seen any remedy, though
13:06 madnexus post-factum: yeah, was trying to avoid creating a script just for this but seems like this could be the only way
13:09 nehar joined #gluster
13:10 post-factum madnexus: well, another trick you could try is autofs
13:10 post-factum madnexus: _netdev,x-systemd.automount
13:10 haomaiwang joined #gluster
13:11 post-factum it solved mounting cifs share form me on arch
13:11 post-factum *for
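The automount trick post-factum describes would look roughly like this in /etc/fstab (hostname and volume name are placeholders; the behaviour assumes systemd, e.g. CentOS 7):

```shell
# /etc/fstab — defer the gluster mount until first access, so it does not
# race glusterd during boot:
# server1:/gv0  /mnt/gv0  glusterfs  defaults,_netdev,x-systemd.automount  0 0
```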
13:11 nehar_ joined #gluster
13:15 RameshN joined #gluster
13:15 nishanth joined #gluster
13:16 unclemarc joined #gluster
13:17 shubhendu joined #gluster
13:20 poornimag joined #gluster
13:23 haomaiwang joined #gluster
13:23 DV joined #gluster
13:24 s-hell left #gluster
13:27 mhulsman1 joined #gluster
13:33 madnexus post-factum: thanks, will give that a try as well!
13:35 kanagaraj joined #gluster
13:39 shaunm joined #gluster
13:43 sebamontini joined #gluster
13:47 pjrebollo joined #gluster
13:49 spalai left #gluster
13:55 shyam joined #gluster
13:56 EinstCrazy joined #gluster
14:03 BogdanR_ joined #gluster
14:03 BogdanR_ Hello
14:03 glusterbot BogdanR_: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
14:03 mhulsman joined #gluster
14:04 BogdanR_ How can I see the current cache-size setting from my volumes?
14:04 marbu joined #gluster
14:06 mhulsman1 joined #gluster
14:06 kdhananjay joined #gluster
14:19 kshlm joined #gluster
14:20 post-factum BogdanR_: gluster volume get VOLUMENAME performance.cache-size
14:21 haomaiwa_ joined #gluster
14:23 pur joined #gluster
14:25 theron joined #gluster
14:28 taavida1 how can I recover from split-brain situation, where trusted.afr for a replicated volume is only set on one of two bricks? Should this be resolved by removing trusterd.afr ext.Attr and do a heal afterwards?
14:31 taavida1 I did read the split-brain wiki but from the example it's not clear to me what would be the best approach if there's no trusted.afr  on one of the bricks for the volume
14:33 BogdanR_ post-factum: I have 3.5.2. Is this option available in my version because I get an error? (unrecognized word: get)
14:34 post-factum BogdanR_: dunno, sorry. I'm on 3.7.x branch
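For context, `volume get` arrived in the 3.7 series; on older releases such as 3.5 the closest equivalent is `volume info`, which lists only options that were explicitly changed (the volume name below is a placeholder):

```shell
# GlusterFS 3.7+:
gluster volume get myvol performance.cache-size
# 3.5.x has no "get" subcommand — defaults are not shown, only the
# "Options Reconfigured" section:
gluster volume info myvol
```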
14:34 BogdanR_ post-factum: What would you say is the most important performance optimization would I could do on a system which does a fair amount of read IO ?
14:35 skylar joined #gluster
14:36 mhulsman2 joined #gluster
14:42 hamiller joined #gluster
14:45 Chinorro joined #gluster
14:45 post-factum BogdanR_: big files, small files, random read, linear read?
14:51 haomaiwa_ joined #gluster
14:52 Gaurav_ joined #gluster
14:52 GluserUser111 joined #gluster
14:53 theron joined #gluster
14:53 GluserUser111 Hey someone there
14:54 spalai joined #gluster
14:58 nishanth joined #gluster
14:59 nishanth joined #gluster
15:01 haomaiwa_ joined #gluster
15:03 haomaiwa_ joined #gluster
15:04 coredump joined #gluster
15:04 haomaiwa_ joined #gluster
15:04 theron joined #gluster
15:08 EinstCrazy joined #gluster
15:15 robb_nl joined #gluster
15:23 kotreshhr left #gluster
15:24 BogdanR_ post-factum: small files, random read
15:24 shlant joined #gluster
15:27 amye joined #gluster
15:27 Gaurav_ joined #gluster
15:29 spalai joined #gluster
15:32 farhorizon joined #gluster
15:33 post-factum BogdanR_: first of all, consider not using more than 100–300 files per folder
15:33 post-factum BogdanR_: then, it is a good idea to keep tracking those files somewhere in database
15:34 post-factum anyway, i do not know what exactly are those files
15:34 mhulsman joined #gluster
15:35 kdhananjay joined #gluster
15:36 ddayal joined #gluster
15:37 jhyland joined #gluster
15:38 jhyland joined #gluster
15:41 ddayal Hey Guys, I see from https://www.gluster.org/pipermail/gluster-users/2016-January/024828.html thread that gluster 3.8 is planned to be released in the end of May or early June. Do we have any specific dates for the release? Also wondering if we are still in the schedule mentioned in the thread?
15:42 glusterbot Title: [Gluster-users] 3.8 Plan changes - proposal (at www.gluster.org)
15:43 rwheeler joined #gluster
15:50 EinstCrazy joined #gluster
15:55 EinstCra_ joined #gluster
15:55 nangthang joined #gluster
15:58 amye_ joined #gluster
15:59 ayma joined #gluster
16:01 EinstCrazy joined #gluster
16:01 mbukatov joined #gluster
16:01 haomaiwang joined #gluster
16:02 BogdanR_ post-factum: I can't ask for programmers to change the way the files are stored because I would only accomplish to be hated for life :)
16:02 BogdanR_ I am curious what config options would help a usage with many small random reads and I would try to optimize on that side.
16:03 BogdanR_ Actually, I am curious if there is one or two who might have the biggest impact because I can shurely try to adjust all of them but I need a starting point.
16:04 post-factum BogdanR_: that's weird. we cooperate with programmers to force them to use gfapi, for example
16:04 post-factum that works much faster than fuse
16:04 shlant hi all. I think I broke something? I'm on ec2. Mounted an ec2 volume @ /data. Created a gluster volume @ /data/gluster. Then I mounted the gluster volume @ /data/gluster/app. If I try and run `ls` in /data/gluster/app, it hangs.
16:05 theron joined #gluster
16:05 shlant I assume that's not expected?
16:05 BogdanR_ post-factum: You have smart programmers. I deal with morrons in this case :)
16:05 BogdanR_ So I need to work around the stupid things they do.
16:11 shlant oh man. tried stop/delete volume and now df -h hangs...
16:17 F2Knight joined #gluster
16:19 ndevos shlant: uh, if you use /data/gluster as a brick, you should not use that directory for anything else, including mounting something else in it
16:21 shlant ndevos: ok yea I thought that may be an issue. So if I want my gluster files to be on my mounted ec2 volume, how should I set that up?
16:21 ndevos not sure why it would cause a hang, but it definitely falls in the "undefined behaviour" category
16:21 shlant I am still a bit confused by volume vs mount
16:21 shlant because my server and client are the same host
16:21 Manikandan joined #gluster
16:22 ndevos shlant: you create a volume with bricks, for example volume "data" with brick <hostname>:/bricks/data
16:22 shlant and then I have to mount that?
16:23 ndevos shlant: after that, you can mount the volume on /data/gluster with: mount -t glusterfs <hostname>:data /data/gluster
16:23 shlant ah
16:23 kanagaraj joined #gluster
16:23 shlant so could I have /data
16:23 shlant and then create a volume at /data/brick
16:23 shlant and mount it at /data/mnt?
16:23 ndevos yes, that would be an option
16:23 shlant ok cool
16:24 shlant so my issue was it was nested
16:24 shlant will try that
16:24 ndevos just be aware that all contents in /data/brick should only be accessed by the gluster processes, not by users or applications
16:25 ndevos everyone/thing should use /data/mnt to create/read/write/... contents
16:25 shlant yea so my files I want to sync are in /data/mnt
16:25 calavera joined #gluster
16:26 shlant so just so I understand, what is /data/brick used for?
16:26 shlant intermediate files while syncing?
16:26 ndevos the /data/brick directory contains the real copy of the files, /data/mnt is only a network mount
16:27 ndevos so when you want to have 2 or more copies of the files, you need to create a "replicate 2" volume
16:27 shlant ok so if I wanted to put a file somewhere to have it synced, I put it in the mount?
16:28 ndevos yes, just write to the /data/mnt, and the mountpoint will connect to the bricks that should have copies of the file
16:28 shlant ndevos: thanks for the clarification
16:29 plarsen joined #gluster
16:29 ndevos you're welcome shlant, just let us know here if you have any other questions
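The brick-vs-mount layout ndevos describes can be sketched as follows (hostname "server1" and volume name "data" are placeholders):

```shell
mkdir -p /data/brick /data/mnt
# the brick directory is gluster-internal storage — never touch it directly
gluster volume create data server1:/data/brick
gluster volume start data
# applications use only the mountpoint:
mount -t glusterfs server1:data /data/mnt
echo hello > /data/mnt/file.txt
```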
16:30 chirino_m joined #gluster
16:36 bennyturns joined #gluster
16:39 robb_nl joined #gluster
16:45 squizzi_ joined #gluster
16:57 ekuric joined #gluster
17:01 haomaiwang joined #gluster
17:03 papamoose joined #gluster
17:13 Chinorro joined #gluster
17:16 bluenemo joined #gluster
17:18 johnmilton joined #gluster
17:21 bowhunter joined #gluster
17:26 skylar joined #gluster
17:35 pjreboll_ joined #gluster
17:36 pjreboll_ joined #gluster
17:38 shyam joined #gluster
17:41 hchiramm joined #gluster
17:42 B21956 joined #gluster
17:43 jiffin joined #gluster
17:44 togdon joined #gluster
17:49 anil joined #gluster
17:50 sebamontini joined #gluster
18:00 theron joined #gluster
18:01 haomaiwang joined #gluster
18:07 theron joined #gluster
18:10 chirino_m joined #gluster
18:12 jmarley joined #gluster
18:12 shubhendu joined #gluster
18:13 Wizek joined #gluster
18:15 Wizek_ joined #gluster
18:16 ivan_rossi left #gluster
18:24 jmarley_ joined #gluster
18:25 anil joined #gluster
18:31 ayma joined #gluster
18:32 ovaistariq joined #gluster
18:38 foster joined #gluster
18:40 nishanth joined #gluster
18:42 jri joined #gluster
18:44 Wizek joined #gluster
18:48 Wizek_ joined #gluster
18:53 mhulsman joined #gluster
18:57 nathwill joined #gluster
19:01 theron joined #gluster
19:01 64MAAHPUZ joined #gluster
19:02 kenansulayman joined #gluster
19:02 ahino joined #gluster
19:15 ovaistariq joined #gluster
19:20 jmarley joined #gluster
19:20 jmarley_ joined #gluster
19:22 d0nn1e joined #gluster
19:29 penguinRaider joined #gluster
19:49 syadnom guys, I'm needing to script up a stop and start (gluster volume stop gv0 <answer y>; gluster volume start gv0)
19:49 Philambdo joined #gluster
19:49 syadnom but I can't see the <answer y> option documented
19:52 chirino_m joined #gluster
19:52 syadnom nevermind.... --mode=script
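For reference, the non-interactive form syadnom found:

```shell
# --mode=script suppresses the interactive y/n confirmation
gluster --mode=script volume stop gv0
gluster --mode=script volume start gv0
```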
19:55 theron joined #gluster
19:57 rwheeler joined #gluster
19:58 robb_nl joined #gluster
20:01 haomaiwa_ joined #gluster
20:17 chirino joined #gluster
20:21 chirino joined #gluster
20:23 ctria joined #gluster
20:45 stuszyns_ joined #gluster
20:51 calavera joined #gluster
20:57 theron joined #gluster
21:01 haomaiwang joined #gluster
21:02 foster joined #gluster
21:05 deniszh joined #gluster
21:05 hamiller joined #gluster
21:06 ovaistariq joined #gluster
21:11 decay joined #gluster
21:11 DV joined #gluster
21:38 ovaistariq joined #gluster
21:38 arcolife joined #gluster
21:38 togdon joined #gluster
21:40 arcolife joined #gluster
21:51 johnmilton joined #gluster
21:56 togdon left #gluster
22:01 haomaiwa_ joined #gluster
22:02 Chinorro joined #gluster
22:03 ovaistariq joined #gluster
22:07 DV joined #gluster
22:08 deniszh joined #gluster
22:12 johnmilton joined #gluster
22:13 ovaistariq joined #gluster
22:15 bitchecker joined #gluster
22:16 deniszh joined #gluster
22:20 badone left #gluster
22:23 deniszh joined #gluster
22:31 ovaistariq joined #gluster
22:37 ctria joined #gluster
22:48 amye joined #gluster
22:50 Arrfab joined #gluster
23:01 haomaiwa_ joined #gluster
23:10 shyam joined #gluster
23:12 shyam joined #gluster
23:14 Ulrar joined #gluster
23:26 ovaistariq joined #gluster
23:26 kenansulayman joined #gluster
23:26 abyss^ joined #gluster
23:30 B21956 joined #gluster
23:33 abyss^ joined #gluster
23:49 kenansul- joined #gluster
