IRC log for #gluster, 2016-08-23


All times shown according to UTC.

Time Nick Message
00:10 harish joined #gluster
01:04 shdeng joined #gluster
01:29 Lee1092 joined #gluster
01:45 spalai joined #gluster
02:00 hagarth joined #gluster
02:41 Vaelatern joined #gluster
02:45 Javezim Is this command not supported in 3.8? - getfattr -n replica.split-brain-status <path-to-file>
02:48 JoeJulian Javezim: Yes, it's still supported in 3.8.
02:48 Javezim When I run it I am getting - replica.split-brain-status: Operation not supported
02:49 Javezim Running - glusterfs 3.8.2 built on Aug 10 2016 16:09:15
02:51 JoeJulian hmm, maybe you're right. I see the xattr, but it looks like it might only be used internally.
02:52 JoeJulian afr-inode-read.c:afr_handle_heal_xattrs  and afr-common.c:afr_get_split_brain_status
02:53 Javezim Was just reading this - https://gluster.readthedocs.io/en/latest/Troubleshooting/heal-info-and-split-brain-resolution/
02:53 glusterbot Title: Split Brain (Auto) - Gluster Docs (at gluster.readthedocs.io)
02:53 Javezim Would seriously come in handy if it worked
02:54 harish joined #gluster
02:54 JoeJulian Nope, I'm wrong. afr_handle_heal_xattrs is called in afr as part of afr_getxattr
02:59 Javezim I find that many of these commands don't actually work :/ Makes healing split brains pretty difficult
02:59 Javezim Eg, the gluster volume heal <VOLNAME> split-brain bigger-file <FILE>
02:59 Javezim Always comes back with - No such file or directory
02:59 Javezim Volume heal failed.
02:59 Javezim When it exists
03:00 JoeJulian The client where you ran that command, you're sure it's running 3.8?
03:00 Javezim I ran it on the server, but yeah its on 3.8
03:00 JoeJulian note I said client, not server. That xattr is handled via the fuse client.
03:01 JoeJulian (or you can get it through libgfapi, but I doubt you're doing it that way.)
03:01 JoeJulian Would be kind-of cool, though...
03:02 Gambit15 joined #gluster
03:03 Javezim When I run it from a client I get - The program 'gluster' is currently not installed. You can install it by typing: blah blah blah
03:03 Javezim However it is installed - root@xxxxxxxxx glusterfs -V
03:03 Javezim glusterfs 3.8.2 built on Aug 10 2016 16:09:15
03:03 JoeJulian how do you get that error?
03:04 JoeJulian Because the client shouldn't care if the gluster command is installed.
03:05 JoeJulian Yeah, there's nothing in the source with that error message.
03:06 Javezim So I am running the - gluster volume heal gv0mel split-brain bigger-file, from a glusterfs-client
03:06 Javezim The program 'gluster' is currently not installed. You can install it by typing:
03:06 Javezim apt-get install glusterfs-server
03:06 JoeJulian You were asking about getfattr.
03:07 Javezim Right so the getfattr one just returns back with the filename: Input/output error
03:08 Javezim Now generally we see that when its Metadata split brain
03:08 JoeJulian the getfattr command is done on a client, even if that client is also a server. The fuse mount is considered a client.
03:08 Javezim But isn't this what the command is for
03:08 Vaelatern joined #gluster
03:09 JoeJulian So "getfattr -n replica.split-brain-status $fuse_mounted_file_path" is giving you an i/o error?
03:09 Javezim Yes
03:10 JoeJulian Check the client log. It shouldn't.
03:12 magrawal joined #gluster
03:13 Javezim http://paste.ubuntu.com/23080247/
03:13 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
03:16 Javezim On each one I run it on I just get that line there
03:17 Vaelatern joined #gluster
03:23 Javezim @JoeJulian, So just ran the command on an entire directory - Eg. getfattr -n replica.split-brain-status /path/to/file/*
03:24 Javezim http://paste.ubuntu.com/23080254/
03:24 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
03:28 Javezim Hmm actually, That's in GFID Split brain isn't it - http://paste.ubuntu.com/23080259/
03:28 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
03:29 Javezim How does one fix GFID Mismatches?
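For reference, a GFID mismatch can be confirmed by comparing the trusted.gfid xattr of the offending file directly on each brick (not through the fuse mount). A minimal sketch; the brick path below is hypothetical, only the volume name gv0mel comes from the conversation above:

  # run on each brick server against the brick's copy of the file
  getfattr -d -m . -e hex /data/brick1/gv0mel/path/to/file | grep trusted.gfid

If the hex values differ between bricks, the copies were created independently. One commonly described manual fix is to delete the unwanted copy on the bad brick together with its hard link under <brick>/.glusterfs/<aa>/<bb>/<full-gfid>, then trigger a heal so the good copy is replicated back; treat that as a sketch and check the split-brain documentation linked earlier before acting on it.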
03:36 spalai joined #gluster
03:54 kshlm joined #gluster
04:01 atinm joined #gluster
04:17 itisravi joined #gluster
04:17 eightyeight what hashing algorithm is used for bitrot detection?
04:19 shubhendu joined #gluster
04:25 ramky joined #gluster
04:26 jobewan joined #gluster
04:26 jiffin joined #gluster
04:26 eightyeight sha-256?
04:31 nbalacha joined #gluster
04:37 itisravi eightyeight: yes
04:42 Vaelatern joined #gluster
04:44 eightyeight itisravi: thx
04:47 eightyeight is the checksum per file, or per block?
04:48 eightyeight IE: if a glusterfs client requests a 20MB file, running that through SHA-256 first, and checking for integrity, could take a while
04:48 eightyeight although, i guess it needs to take that time, regardless
04:51 eightyeight i'm assuming that redundancy is needed to fix bitrot (replicate or disperse)
04:51 eightyeight and that if the client requests a file that does not match its checksum, the bitrot daemon will look for a good redundant copy, send that to the client, and use that data to fix the corrupted bits
04:52 eightyeight (similar in design to ZFS)
04:55 rafi joined #gluster
04:56 itisravi The checksum is per file. It is stored as an extended attribute on the file. I don't think the detection happens in real time every time a client accesses a file. A daemon periodically crawls through the files, computes the checksum and compares it with the stored one.
04:57 itisravi If it finds a mismatch, it marks it bad. Subsequent client access for that file will fail.
04:58 itisravi Unless like you noted, you have a replica volume  to serve from the good copy.
05:02 Bhaskarakiran joined #gluster
05:04 sanoj joined #gluster
05:04 eightyeight interesting. a read doesn't trigger an integrity check, eh?
05:06 eightyeight but, for periodic checking, you need to set a `scrub-frequency' (daily, weekly, biweekly, monthly), no?
05:07 itisravi I don't think so, but overclk might be able to confirm, he's one of the bitrot devs.
05:07 itisravi yeah.
05:08 eightyeight ok
05:09 ndarshan joined #gluster
05:09 itisravi eightyeight: I can't find the link to the upstream doc, but the rhgs doc has some information on the parameters: https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3.1/html/Administration_Guide/chap-Detecting_Data_Corruption.html
05:09 glusterbot Title: Chapter 20. Detecting Data Corruption with BitRot (at access.redhat.com)
05:10 eightyeight thx
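For reference, the scrub settings discussed above are per-volume bitrot options. A minimal sketch of enabling detection and setting the schedule (value lists as in the admin guide linked above; the volume name is a placeholder):

  gluster volume bitrot <VOLNAME> enable
  gluster volume bitrot <VOLNAME> scrub-throttle lazy      # lazy | normal | aggressive
  gluster volume bitrot <VOLNAME> scrub-frequency daily    # daily | weekly | biweekly | monthly
  gluster volume bitrot <VOLNAME> scrub status             # report of the last scrub, if the installed release supports it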
05:11 spalai joined #gluster
05:13 skoduri joined #gluster
05:18 aspandey joined #gluster
05:21 kramdoss_ joined #gluster
05:22 aravindavk joined #gluster
05:28 kukulogy I removed a brick but I'm getting an error whenever I try to heal the volume. 0-nfs: readv on /var/run/gluster/68809f5f5e7432a8f3f207477e320c80.socket failed (Attribute not found)
05:35 nbalacha joined #gluster
05:38 spalai joined #gluster
05:39 raghug joined #gluster
05:43 kukulogy nvm fixed it!
05:43 kukulogy https://groups.google.com/forum/#!topic/mailing.freebsd.ports-bugs/jIf9250KUgA
05:43 glusterbot Title: Google Groups (at groups.google.com)
05:43 atalur joined #gluster
05:47 aravindavk joined #gluster
05:48 poornima joined #gluster
05:48 poornima_ joined #gluster
05:49 poornima joined #gluster
05:50 Muthu_ joined #gluster
05:51 kdhananjay joined #gluster
05:52 rastar joined #gluster
05:55 Gnomethrower joined #gluster
05:55 mhulsman joined #gluster
05:55 hgowtham joined #gluster
05:56 satya4ever joined #gluster
06:01 aravindavk joined #gluster
06:02 derjohn_mob joined #gluster
06:02 hchiramm joined #gluster
06:04 kotreshhr joined #gluster
06:05 pdrakeweb joined #gluster
06:13 mhulsman joined #gluster
06:18 post-factum JoeJulian: yup, I have some tips, but ben453 is not here :(
06:19 atinm joined #gluster
06:22 Manikandan joined #gluster
06:22 prasanth joined #gluster
06:22 devyani7 joined #gluster
06:23 Saravanakmr joined #gluster
06:24 devyani7 joined #gluster
06:25 itisravi joined #gluster
06:25 jith_ joined #gluster
06:27 overclk eightyeight: ping, checksums are per (full) file -- not blocks. this is one of the things that needs to be improved when a file is sharded..
06:28 jtux joined #gluster
06:28 overclk eightyeight: read does not cross verify checksums on the fly. a file is marked bad when the filesystem is scrubbed.
06:29 karnan joined #gluster
06:30 rastar joined #gluster
06:33 kdhananjay joined #gluster
06:34 kotreshhr joined #gluster
06:36 skoduri joined #gluster
06:36 anil joined #gluster
06:38 spalai joined #gluster
06:39 msvbhat joined #gluster
06:41 jkroon joined #gluster
06:41 pur joined #gluster
06:41 ankitraj joined #gluster
06:44 jith_ hi all, I have configured swift with glusterfs, two machines are used to configure the glusterfs volume, and I have mounted the glusterfs volume on a third machine (swift). I am facing an error (http://paste.openstack.org/show/562407/). Pls guide
06:44 glusterbot Title: Paste #562407 | LodgeIt! (at paste.openstack.org)
06:46 ashiq joined #gluster
06:48 jtux joined #gluster
06:52 atalur joined #gluster
06:54 [diablo] joined #gluster
06:54 Gnomethrower joined #gluster
06:55 atinm joined #gluster
07:01 kdhananjay joined #gluster
07:11 jkroon hi all, it's with *some* reservation that I need to state that my most recent "gluster" problems seem in fact not to have been gluster at all, but rather a kernel problem.  ndevos, JoeJulian, kshlm - i think you were all involved in the discussions recently.
07:14 hchiramm joined #gluster
07:15 jkroon i'm not sure what causes it, what I do know is that we saw major disk IO problems on mdadm raid5 from kernels 4.1 onwards (we've got at least one server we downgraded back to 4.0 and all instability and "deadlocks" or "freezes" went away - discussion on #lvm, suspicion of mdadm refactoring).  all of our servers on gluster are running mdadm raid1.  So it could relate.  we've just downgraded back down to 3.19.5 on these two servers that
07:15 jkroon gave us the most problems and they haven't blipped again since with our distributed LAMP setup on top of glusterfs (on top of ext4 bricks on lvm on mdadm raid1).
07:15 jkroon (MySQL portions runs off of other dedicated servers)
07:15 shubhendu joined #gluster
07:17 jkroon jith_, gluster version?
07:17 jkroon i've seen that a few times long long ago when the client and server got disconnected.
07:17 jkroon could be that glusterd died on the server you're connecting to?
07:20 aravindavk joined #gluster
07:34 jri joined #gluster
07:35 atalur joined #gluster
07:37 kotreshhr joined #gluster
07:44 jith_ jkroon: thanks for the reply
07:44 jith_ jkroon, version is 3.5.2
07:45 jkroon jith_, umount + mount.
07:45 jkroon used to fix it for me.  was caused by glusterd or glusterfsd restarts.
07:45 jkroon i'd recommend upgrading to 3.7.X.
07:45 derjohn_mob joined #gluster
07:49 ivan_rossi joined #gluster
07:49 msvbhat joined #gluster
07:51 masber joined #gluster
07:54 Infinite_ joined #gluster
07:54 Slashman joined #gluster
07:55 Infinite_ Hi there!
07:56 Infinite_ I have a question regarding the migration of a huge number of files and I would greatly appreciate it if anyone could help: I want to copy 1TB of data consisting mostly of files which are around a few MBs or smaller and gluster is slowing down the process immensely. Is there a way to speed the process up, for example by disabling the replication mechanism on the target so that the other server connected to the target server can
08:03 rastar joined #gluster
08:10 fsimonce joined #gluster
08:25 spalai joined #gluster
08:28 harish joined #gluster
08:32 kdhananjay joined #gluster
08:36 ansyeblya joined #gluster
08:37 ansyeblya could someone link a decent article about glusterFS performance tuning. like: there are 'these' params, then do this, that and that. this is how you check current write/read, this is what we advise you to change and check again
08:37 ansyeblya so far I wasnt able to find such one
08:37 ansyeblya they (params) do this, that & that* mistyped
08:40 jri joined #gluster
08:51 jith_ jkroon, thanks, sure
08:51 jith_ umount+mount is for glusterfs volume right?
08:53 jith_ jkroon, pls check this http://paste.openstack.org/show/562425/,
08:53 glusterbot Title: Page Not Found | LodgeIt! (at paste.openstack.org)
08:54 jith__ joined #gluster
08:54 jkroon jith_, gluster volume start ${volname} force
08:54 jiffin ansyeblya: Check this on http://gluster.readthedocs.io/en/latest/Administrator%20Guide/Performance%20Testing/
08:54 glusterbot Title: Performance Testing - Gluster Docs (at gluster.readthedocs.io)
08:54 jiffin should be helpful
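For reference, guides of that sort mostly come down to measuring, adjusting volume options with gluster volume set, and measuring again. A small sketch of commonly cited knobs -- the values here are illustrative placeholders, not recommendations:

  gluster volume get <VOLNAME> all | grep performance        # current option values (3.8+)
  gluster volume set <VOLNAME> performance.cache-size 1GB
  gluster volume set <VOLNAME> performance.io-thread-count 32
  gluster volume set <VOLNAME> performance.write-behind-window-size 4MB
  gluster volume profile <VOLNAME> start                     # per-brick latency/throughput statistics
  gluster volume profile <VOLNAME> info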
08:54 ankitraj joined #gluster
08:55 jkroon and no, i actually refer to the mount-point, eg I mount gluster volume gl_home on /home, so I'd do umount /home && mount /home - if that failed to UNMOUNT I'd often just do umount -fl ... which really isn't right but since everything on there deadlocks ...
08:55 jith__ jkroon: in that volume status, the brick is not online and it is not showing the corresponding port numbers
08:55 jkroon jith_, that's what the start force attempts to remedy.
08:56 jith__ jkroon, yes i checked, same gvol status
08:56 kdhananjay joined #gluster
08:57 jkroon jith__, have you run "gluster volume start gvol force"
08:57 jkroon are the bricks mounted?
08:58 jith_ yes i run,
08:58 jith_ same status
08:59 jkroon ls -lad /data/gluster/gvol?/brick1/.glusterfs - does this show two entries on both of the servers?
09:00 jith_ it is mounted, the result of mount command,
09:00 jith_ /dev/mapper/vgglus1-gbrick1 on /data/gluster/gvol0 type xfs (rw,relatime,attr2,nobarrier,inode64,noquota)
09:00 jith_ /dev/mapper/vgglus2-gbrick2 on /data/gluster/gvol1 type xfs (rw,relatime,attr2,nobarrier,inode64,noquota)
09:02 jith_ jkroon, i have two storage vols on one glusterfs server and another two storage vols on another server. from this four bricks i have created distribute-replicate gluster volume
09:02 jkroon jith_, gluster status already told me that :)
09:03 jkroon what I'm trying to determine is whether the .glusterfs folders exist inside of the bricks.
09:03 jith_ jkroon, yes..
09:03 jkroon you sound pretty confident.  so the real question is why won't the brick daemons fire up.
09:03 jith_ ls -lad /data/gluster/gvol0
09:03 jith_ ls: cannot access /data/gluster/gvol0: Input/output error
09:03 arcolife joined #gluster
09:03 jkroon ooh, ok, so that's why.
09:04 jkroon your underlying filesystem is MIA.
09:04 jith_ xfs
09:04 jkroon i see you're using xfs, so until you can ls /data/gluster/gvol[01] successfully on both servers you will need to concentrate on getting those underlying filesystems back up.
09:05 jkroon and i'm not familiar with xfs so can't help you there.  i do know it's a "complicated" beast.
09:05 jkroon jith_, meeting, i might be able to check in again later
09:05 jith_ jkroon, ok sure thanks
09:06 jith_ [2016-08-23 03:27:28.291205] E [client-handshake.c:1760:client_query_portmap_cbk] 0-gvol-client-3: failed to get the port number for remote subvolume. Please run 'gluster volume status' on server to see if brick process is running.
09:06 jith_ [2016-08-23 03:27:28.291228] I [client.c:2229:client_rpc_notify] 0-gvol-client-3: disconnected from 10.10.15.160:24007. Client process will keep trying to connect to glusterd until brick's port is available
09:06 jith_ [2016-08-23 03:27:28.291237] E [afr-common.c:4168:afr_notify] 0-gvol-replicate-1: All subvolumes are down. Going offline until atleast one of them comes back up.
09:07 jith_ jkroon, sure thanks
09:07 jith_ [2016-08-23 03:27:25.033981] E [common-utils.c:117:mkdir_p] 0-: Failed to create directory, possibly some of the components were not directories
09:07 jith_ [2016-08-23 03:27:25.033994] I [mem-pool.c:539:mem_pool_destroy] 0-gvol-changelog: size=108 max=0 total=0
09:07 jith_ [2016-08-23 03:27:25.034001] E [xlator.c:403:xlator_init] 0-gvol-changelog: Initialization of volume 'gvol-changelog' failed, review your volfile again
09:07 jith_ [2016-08-23 03:27:25.034009] E [graph.c:307:glusterfs_graph_init] 0-gvol-changelog: initializing translator failed
09:07 jith_ [2016-08-23 03:27:25.034015] E [graph.c:502:glusterfs_graph_activate] 0-graph: init failed
09:07 jith_ [2016-08-23 03:27:25.034299] W [glusterfsd.c:1095:cleanup_and_exit] (-->/usr/lib/x86_64-linux-gnu/libgfrpc.so.0(rpc_clnt_handle_reply+0x90) [0x7fd856db0ae0] (-->/usr/sbin/glusterfsd(mgmt_getspec_cbk+0x32e) [0x40bc8e] (-->/usr/sbin/glusterfsd(glusterfs_process_volfp+0x113) [0x407a73]))) 0-: received signum (0), shutting down
09:07 glusterbot jith_: ('s karma is now -153
09:07 glusterbot jith_: ('s karma is now -154
09:07 glusterbot jith_: ('s karma is now -155
09:07 jith_ above is brick error
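Given the Input/output error on the brick path above, a sensible next step is to check the underlying XFS filesystem itself before worrying about the brick daemons. A minimal sketch, using the device and mount names from the mount output earlier; xfs_repair must only be run against an unmounted device:

  dmesg | grep -iE 'xfs|i/o error' | tail -n 30       # kernel-level I/O or XFS corruption messages
  umount /data/gluster/gvol0
  xfs_repair -n /dev/mapper/vgglus1-gbrick1           # -n: check only, report problems without modifying
  mount /data/gluster/gvol0 && ls -la /data/gluster/gvol0
  gluster volume start gvol force                     # retry the bricks once the path lists cleanly again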
09:08 atalur_ joined #gluster
09:12 arcolife joined #gluster
09:12 archit_ joined #gluster
09:33 rafi1 joined #gluster
09:40 jri joined #gluster
09:43 ashiq joined #gluster
09:47 itisravi_ joined #gluster
09:48 GoKule joined #gluster
09:48 nbalacha joined #gluster
09:50 GoKule Hello! After upgrading gluster from 3.8.1 to 3.8.2 (Linux compile), I notice that when I type gluster volume heal <volname> info, I'm getting the same 70 files in the info output on one brick. They are gfid entries.
09:51 GoKule Is there any solution to heal those?
09:51 GoKule In heal statistics, failed entry record is 70.
09:54 GoKule info split-brain for both bricks is 0
10:07 kotreshhr joined #gluster
10:11 David_Varghese joined #gluster
10:16 anil joined #gluster
10:20 Muthu_ joined #gluster
10:27 derjohn_mob joined #gluster
10:29 ankitraj joined #gluster
10:30 nbalacha joined #gluster
10:31 ashah joined #gluster
10:36 msvbhat joined #gluster
10:43 jri joined #gluster
10:47 msvbhat joined #gluster
10:52 robb_nl joined #gluster
10:52 rafi1 joined #gluster
10:54 kukulogy joined #gluster
10:56 nbalacha joined #gluster
10:57 hackman joined #gluster
11:01 jwd joined #gluster
11:07 Muthu_ joined #gluster
11:07 ashiq joined #gluster
11:10 atalur joined #gluster
11:12 kotreshhr joined #gluster
11:13 kovshenin joined #gluster
11:15 spalai joined #gluster
11:36 karnan joined #gluster
11:37 karnan_ joined #gluster
11:37 karnan joined #gluster
11:38 karnan_ joined #gluster
11:39 karnan joined #gluster
11:53 Muthu_ joined #gluster
11:55 d0nn1e joined #gluster
12:11 dlambrig joined #gluster
12:19 skoduri joined #gluster
12:24 johnmilton joined #gluster
12:29 johnmilton joined #gluster
12:29 shyam joined #gluster
12:31 itisravi joined #gluster
12:44 unclemarc joined #gluster
12:48 dlambrig joined #gluster
12:50 derjohn_mob joined #gluster
12:59 poornima joined #gluster
13:14 dlambrig joined #gluster
13:18 atinm joined #gluster
13:19 kkeithley joined #gluster
13:19 plarsen joined #gluster
13:21 julim joined #gluster
13:26 poornima joined #gluster
13:27 jith_ joined #gluster
13:28 kshlm joined #gluster
13:33 jiffin1 joined #gluster
13:33 skylar joined #gluster
13:35 hfourie joined #gluster
13:37 hfourie hi all, can someone shed some light on brick sizing please?
13:38 dnunez joined #gluster
13:39 post-factum hfourie: what?
13:39 ben453 joined #gluster
13:39 hfourie hi, thanks. what is the max bricks you should have on a node ?
13:40 shubhendu joined #gluster
13:42 ben453 Hi, I'm trying to build a gluster rpm from source on Ubuntu, but I'm getting dependency errors when I run "make glusterrpms" because it's using yum to determine what dependencies I have installed. Is there an easy way around this?
13:44 ben453 I'm following these instructions to build gluster on Ubuntu and then try and make an RPM: https://gluster.readthedocs.io/en/latest/Developer-guide/Building-GlusterFS/#Build Requirements
13:44 glusterbot Title: Build and Install GlusterFS - Gluster Docs (at gluster.readthedocs.io)
13:44 eightyeight overclk: thx
13:44 misc ben453: mhh, using a docker container or lxc ?
13:45 ben453 I'm running Ubuntu in VirtualBox
13:46 ben453 It gets all the way to "rpmbuild --define '_topdir /home/ben/repo/glusterfs/extras/LinuxRPM/rpmbuild' -bb rpmbuild/SPECS/glusterfs.spec" and then fails due to missing dependencies
13:46 raghu joined #gluster
13:46 aravindavk joined #gluster
13:49 post-factum hfourie: afaik, max brick count is limited by available sockets and node memory
13:50 post-factum hfourie: TCP sockets, for example
13:50 jiffin1 joined #gluster
13:50 post-factum ben453: consider installing required deps with yum builddep
13:51 post-factum ben453: but first clarify how ubuntu could deal with rpmbuild
13:53 kotreshhr left #gluster
13:58 hfourie hi tcp
13:58 robb_nl joined #gluster
13:59 ben453 post-factum: I'll try that. This is my first time trying to create an RPM. Also with regards to Ubuntu dealing with rpmbuild, it looks like it's installed on 16.04. Running "rpmbuild --version" gives: RPM version 4.12.0.1
13:59 hfourie <post-factum> tcp
14:00 kpease joined #gluster
14:01 ben453 But honestly I'm just trying to apply this patch: http://review.gluster.org/#/c/15289/ to a CentOS server I have running gluster
14:01 glusterbot Title: Gerrit Code Review (at review.gluster.org)
14:01 ben453 which is why I need to build the RPM
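Since the target machine is CentOS, the least painful route is usually to build the RPMs on a CentOS host or container of the same release rather than on Ubuntu, so yum-builddep can resolve the spec's dependencies natively. A rough sketch -- take the exact Gerrit change ref from the Download box on the review page rather than guessing it:

  # on CentOS, matching the target server's major release
  sudo yum install -y git yum-utils rpm-build autoconf automake libtool flex bison
  git clone https://github.com/gluster/glusterfs.git && cd glusterfs
  git checkout v3.8.2                      # or whichever tag the server is running
  # fetch and cherry-pick change 15289 using the ref shown on review.gluster.org
  ./autogen.sh && ./configure              # install any extra -devel packages configure reports missing
  sudo yum-builddep glusterfs.spec         # the spec file is generated by configure
  cd extras/LinuxRPM && make glusterrpms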
14:05 squizzi joined #gluster
14:06 archit_ joined #gluster
14:06 neofob joined #gluster
14:13 pur joined #gluster
14:16 shyam joined #gluster
14:27 Gambit15 Hey guys, anyone here got any experience with sharding & poor performance?
14:27 Gambit15 With replica 3 arbiter 1 across 4 servers, I'm averaging 160Mbps, but as soon as I enable sharding at 512MB, it slows down to 512Kbps
14:29 Gambit15 It's not a problem with resources, as each node currently has 16GB RAM & 2 quad-core Xeons, and there's nothing else running on them. Just a base install + gluster
14:31 Bhaskarakiran joined #gluster
14:31 sage joined #gluster
14:33 jri joined #gluster
14:35 Lee1092 joined #gluster
14:36 hagarth Gambit15: you may want to drop a note on gluster-users. there are a bunch of folks using sharding in production and might have good recommendations for you.
14:38 prasanth joined #gluster
14:39 Gambit15 Coolio, cheers hagarth
14:43 Gambit15 hagarth, is there a channel, or just the mailing list?
14:44 hagarth Gambit15: mailing list
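For reference, the sharding configuration Gambit15 describes corresponds to volume options along these lines (a sketch; shard settings are generally not meant to be toggled on a volume that already holds data):

  gluster volume set <VOLNAME> features.shard on
  gluster volume set <VOLNAME> features.shard-block-size 512MB
  gluster volume get <VOLNAME> features.shard-block-size    # confirm the value in effect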
14:51 atalur joined #gluster
14:53 atalur left #gluster
14:57 kramdoss_ joined #gluster
15:03 Manikandan joined #gluster
15:04 atinm joined #gluster
15:06 congpine joined #gluster
15:06 wushudoin joined #gluster
15:08 ankitraj joined #gluster
15:15 nbalacha joined #gluster
15:18 ben453 joined #gluster
15:27 hagarth joined #gluster
15:29 Manikandan_ joined #gluster
15:30 kpease joined #gluster
15:34 kpease joined #gluster
15:39 kpease joined #gluster
15:43 msvbhat joined #gluster
15:52 magrawal joined #gluster
16:07 Javezim joined #gluster
16:13 shyam joined #gluster
16:16 jobewan joined #gluster
16:24 atalur joined #gluster
16:25 jri joined #gluster
16:30 spalai joined #gluster
16:41 rafi joined #gluster
16:51 rafi joined #gluster
16:56 shubhendu joined #gluster
16:59 atinm joined #gluster
17:02 Manikandan_ joined #gluster
17:07 jiffin joined #gluster
17:11 msvbhat joined #gluster
17:16 kovshenin joined #gluster
17:19 dlambrig joined #gluster
17:20 ivan_rossi left #gluster
17:28 hchiramm joined #gluster
18:16 rastar joined #gluster
18:30 jiffin joined #gluster
18:30 kovshenin joined #gluster
18:59 skylar1 joined #gluster
19:00 David_Varghese joined #gluster
19:00 mhulsman joined #gluster
19:03 dlambrig joined #gluster
19:04 nathwill joined #gluster
19:06 hackman joined #gluster
19:30 karnan joined #gluster
19:42 mhulsman joined #gluster
19:43 johnmilton joined #gluster
19:48 nathwill joined #gluster
19:51 ttkg joined #gluster
20:52 nathwill joined #gluster
20:57 mckornfield joined #gluster
21:02 bkolden joined #gluster
21:08 mckornfield hello! I'm trying to git clone a change from review.gluster.org. Is the best method using the download links and clipboard shortcut at the top right? I get an unable to connect error when cloning
21:48 dlambrig joined #gluster
22:41 hagarth joined #gluster
22:53 Vaelatern joined #gluster
23:05 dlambrig joined #gluster
23:10 nathwill joined #gluster
23:29 masber hi
23:29 glusterbot masber: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.