
IRC log for #gluster, 2016-11-23


All times shown according to UTC.

Time Nick Message
00:15 Caveat4U joined #gluster
00:15 Caveat4U joined #gluster
00:16 Caveat4U joined #gluster
01:10 ankitraj joined #gluster
01:32 shdeng joined #gluster
01:47 virusuy joined #gluster
01:47 virusuy joined #gluster
01:47 susant joined #gluster
01:49 hackman joined #gluster
01:51 haomaiwang joined #gluster
02:10 ankitraj joined #gluster
02:37 haomaiwang joined #gluster
02:47 ilbot3 joined #gluster
02:47 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:55 derjohn_mobi joined #gluster
03:00 vbellur joined #gluster
03:07 B21956 joined #gluster
03:11 Lee1092 joined #gluster
03:13 nishanth joined #gluster
03:15 magrawal joined #gluster
03:17 kramdoss_ joined #gluster
03:21 barajasfab joined #gluster
03:24 riyas joined #gluster
03:25 daMaestro joined #gluster
03:27 minimicro joined #gluster
03:27 minimicro I'm running into some issues trying to get gluster working with RDMA on Ubuntu 14.04
03:28 minimicro https://thepasteb.in/p/y8h6GxgvgZLSO
03:28 glusterbot Title: /TheP(aste)?B\.in/i - For all your pasting needs! (at thepasteb.in)
03:28 minimicro The machines can all see each other, I've set up IPOIB on the 2 machines, and can ping back and forth
03:33 Gnomethrower joined #gluster
03:43 minimicro the docs are a bit unclear
03:43 minimicro is RDMA support more or less a legacy feature?
03:43 minimicro i.e. it used to work and now probably doesn't
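A minimal sketch of how an RDMA-capable volume is typically created and mounted, assuming hostnames gluster1/gluster2, a volume named rdmavol, and brick path /bricks/brick1 (all placeholders, not taken from the conversation above):

    # create a volume that accepts both TCP and RDMA transports
    gluster volume create rdmavol transport tcp,rdma \
        gluster1:/bricks/brick1/rdmavol gluster2:/bricks/brick1/rdmavol
    gluster volume start rdmavol

    # mount over RDMA by appending ".rdma" to the volume name;
    # omitting the suffix falls back to the TCP transport
    mount -t glusterfs gluster1:/rdmavol.rdma /mnt/rdmavol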
03:45 atinm joined #gluster
04:07 nbalacha joined #gluster
04:08 jiffin joined #gluster
04:29 Shu6h3ndu joined #gluster
04:32 sbulage joined #gluster
04:33 Shu6h3ndu joined #gluster
04:42 buvanesh_kumar joined #gluster
04:45 ankitraj joined #gluster
04:53 nbalacha joined #gluster
04:55 sanoj joined #gluster
05:12 Prasad joined #gluster
05:17 ppai joined #gluster
05:28 karthik_us joined #gluster
05:30 ndarshan joined #gluster
05:37 riyas joined #gluster
05:38 hgowtham joined #gluster
05:44 skoduri joined #gluster
05:46 RameshN joined #gluster
05:50 rafi joined #gluster
05:51 susant joined #gluster
05:56 Karan joined #gluster
05:59 kotreshhr joined #gluster
06:01 nishanth joined #gluster
06:03 Saravanakmr joined #gluster
06:05 nbalacha joined #gluster
06:08 apandey joined #gluster
06:16 msvbhat joined #gluster
06:20 ashiq joined #gluster
06:21 itisravi joined #gluster
06:24 sbulage joined #gluster
06:24 Philambdo joined #gluster
06:27 Muthu joined #gluster
06:35 snixor joined #gluster
06:37 prth joined #gluster
06:40 suliba joined #gluster
06:41 jtux joined #gluster
06:42 kdhananjay joined #gluster
06:55 nbalacha joined #gluster
07:20 nbalacha joined #gluster
07:30 devyani7 joined #gluster
07:32 k4n0 joined #gluster
07:41 jtux joined #gluster
07:41 aravindavk joined #gluster
07:42 jkroon joined #gluster
07:47 karthik_us joined #gluster
07:49 witsches joined #gluster
08:01 [diablo] joined #gluster
08:07 ju5t joined #gluster
08:12 jri joined #gluster
08:12 Philambdo1 joined #gluster
08:27 fsimonce joined #gluster
08:28 Gnomethrower joined #gluster
08:28 ju5t joined #gluster
08:31 witsches joined #gluster
08:41 martin_pb joined #gluster
08:42 derjohn_mobi joined #gluster
08:44 hackman joined #gluster
08:46 Philambdo joined #gluster
08:52 karthik_us joined #gluster
08:55 rwheeler joined #gluster
08:56 hgowtham joined #gluster
09:01 jiffin1 joined #gluster
09:08 ju5t joined #gluster
09:15 prasanth joined #gluster
09:21 Slashman joined #gluster
09:25 kotreshhr left #gluster
09:29 witsches joined #gluster
09:30 jiffin1 joined #gluster
09:32 nbalacha joined #gluster
09:32 msvbhat joined #gluster
09:33 skoduri_ joined #gluster
09:36 hgowtham joined #gluster
09:42 panina joined #gluster
09:50 mhulsman joined #gluster
09:52 jtux joined #gluster
09:57 Peppard joined #gluster
10:00 derjohn_mob joined #gluster
10:02 witsches joined #gluster
10:13 ndarshan joined #gluster
10:19 rastar joined #gluster
10:19 opthomasprime joined #gluster
10:19 opthomasprime left #gluster
10:20 opthomasprime joined #gluster
10:33 skoduri joined #gluster
10:40 zhangjn joined #gluster
10:41 zhangjn joined #gluster
10:41 zhangjn joined #gluster
10:43 zhangjn joined #gluster
10:44 zhangjn joined #gluster
10:44 zhangjn joined #gluster
10:45 witsches joined #gluster
11:01 atinm joined #gluster
11:10 ndarshan joined #gluster
11:19 poornima_ joined #gluster
11:33 elastix joined #gluster
11:34 kramdoss_ joined #gluster
11:50 kshlm Weekly meeting starts in ~10 minutes in #gluster-meeting
12:00 jdarcy joined #gluster
12:08 kdhananjay joined #gluster
12:11 jiffin joined #gluster
12:14 arc0 joined #gluster
12:17 Caveat4U joined #gluster
12:20 Philambdo joined #gluster
12:22 witsches joined #gluster
12:24 haomaiwang joined #gluster
12:25 johnmilton joined #gluster
12:33 susant joined #gluster
12:41 witsches joined #gluster
12:46 haomaiwang joined #gluster
12:51 martin_pb joined #gluster
12:58 arpu joined #gluster
13:00 Philambdo joined #gluster
13:02 Philambdo joined #gluster
13:17 atinm joined #gluster
13:18 guhcampos joined #gluster
13:18 alvinstarr joined #gluster
13:19 jkroon joined #gluster
13:22 msvbhat joined #gluster
13:29 witsches joined #gluster
13:29 ankitraj joined #gluster
13:39 johnmilton joined #gluster
13:47 unclemarc joined #gluster
13:53 Wizek_ joined #gluster
13:55 plarsen joined #gluster
14:03 witsches joined #gluster
14:06 skoduri joined #gluster
14:10 sbulage joined #gluster
14:13 ivan_rossi left #gluster
14:17 mhulsman joined #gluster
14:21 opthomasprime joined #gluster
14:23 skylar joined #gluster
14:27 guhcampos joined #gluster
14:39 annettec joined #gluster
14:39 Lee1092 joined #gluster
14:40 jdarcy joined #gluster
14:41 plarsen joined #gluster
14:48 biodose joined #gluster
14:50 biodose I am having an issue with large files and NFS on gluster.  When I mount the file system and copy large files of 3 to 4 GB, the copy completes but then hangs at the very end.  Any ideas?
14:51 cloph likely your local system is caching the file to copy and then waiting for the network to drain the sink... so how does it "hang"?
14:53 biodose I am using sftp to copy the files to the mount point.  sftp gets to 100%, and then nothing.  The terminal locks, and I have to reboot the system.  It does not happen when I mount it as type glusterfs, only as nfs
14:55 biodose I am using CentOS 7 with the latest of everything.
14:56 nbalacha joined #gluster
14:56 jiffin biodose: what do you mean by NFS on gluster
14:56 jiffin are you using the integrated gluster nfs server or knfs exporting a gluster mount
14:56 jiffin ?
14:57 biodose It hangs like this if I use the following command: "mount -t nfs Gluster1:/data data", but not "mount -t glusterfs Gluster1:/data data".  Very strange
14:58 biodose I am using the integrated nfs.
15:04 ndevos biodose: is Gluster1 the server where you sftp to as well? like a localhost mount?
15:05 biodose yes
15:06 ndevos oh, in that case it could well be a problem where the nfs-mountpoint has allocated a lot of memory, and upon the flush/sync gluster/nfs (and the backing filesystem) needs to allocate more memory
15:07 biodose Gluster1 df has: /dev/sda1 /bricks/brick1, /dev/sdb1 /bricks/brick1; then I mount the system using mount -t nfs Gluster1:/data /mnt/data
15:07 kpease joined #gluster
15:07 ndevos this can trigger freeing memory from the vfs, including the nfs-mountpoint... and you're entering a memory-free-before-new-allocations loop right there
15:07 biodose How do I fix it?
15:08 ndevos you should not mount anything over NFS on Gluster servers that also run the Gluster NFS-server
15:08 biodose I have the same issue if I mount it to a different server, but I will try again.
15:08 ndevos that can give you trouble with locking as well as this kind of deadlock
15:09 ndevos yeah, mounting from a different server is less likely to run into this, but if the server that has the nfs-mountpoint also has a brick where data should be written, the same can happen
15:10 biodose The other server does not have any gluster bricks on it.
15:10 ndevos for some reason fuse mounts do not hit this problem as quickly (if at all?), probably because of the 'advanced' caching that the NFS-client does
15:11 ndevos no, the server that has the nfs-mount (not the nfs-server) and has bricks could run into these deadlocks
15:13 ndevos it's like: 1. nfs-client needs RAM for flushing caches, 2. the server side (bricks or nfs-server) needs to allocate RAM, 3. allocation of memory triggers flushing of the NFS-client cache, 4. go to 1
15:13 biodose I will try with the following setup: Gluster1 (has bricks), Gluster2 (has bricks), Node1 (no bricks). I will mount on Node1
15:14 ndevos right, and if you sftp the file to Node1, that should not trigger this memory-allocation-free deadlock
15:16 biodose I will try this.  The reason that I am looking at this route is that oVirt does not like to mount my gluster for some reason: "Error while executing action: Cannot remove Storage Connection. Storage connection parameters are used by the following storage domains: data"
15:16 ankitraj joined #gluster
15:17 ndevos I would not know what the problem is there, you could ask in the ovirt IRC channel or on their mailinglist
15:17 biodose I have...I am hearing crickets.....For now, if I can get NFS to work, that would be fine.
15:17 d0nn1e joined #gluster
15:18 ndevos NFS-mounts on Gluster storage servers are strongly recommended against; locking will not work
15:19 ndevos but, qemu does not use locks, so maybe it is safe in the oVirt case... but you should mount with the "-o nolock" mount option, otherwise mounting may even fail
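A minimal sketch of the mount being suggested here: NFSv3 against the integrated gluster/nfs server, from a client that carries no bricks, with the lock manager disabled via "-o nolock" (the hostname Gluster1 and the data volume come from the conversation; the mount point is a placeholder):

    # on the brick-less client (Node1), not on a Gluster storage server
    mkdir -p /mnt/data
    mount -t nfs -o vers=3,nolock Gluster1:/data /mnt/data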
15:19 atinm joined #gluster
15:20 jtux joined #gluster
15:22 biodose The gluster servers will only be used for storage, and nothing else.  Should I use the ganesha nfs or just the native?  Is there any advantage to v3 or v4?
15:23 ndevos nfs-ganesha is more advanced than gluster/nfs, so it depends a little on the features you need - but nfs-ganesha is in general the advised way; gluster/nfs only gets bugfixes and is being deprecated
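A sketch of moving a volume from the integrated gluster/nfs server to nfs-ganesha, assuming the glusterfs-ganesha packages and the shared-storage/HA prerequisites are already configured (the volume name data comes from the conversation; treat the commands as an outline, not a full procedure):

    # stop exporting the volume through the integrated gluster/nfs server
    gluster volume set data nfs.disable on

    # hand exports over to nfs-ganesha (requires prior ganesha-ha setup)
    gluster nfs-ganesha enable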
15:29 skoduri joined #gluster
15:34 plarsen joined #gluster
15:40 RameshN joined #gluster
15:43 rwheeler joined #gluster
15:49 ashiq joined #gluster
15:51 f0rpaxe joined #gluster
15:55 jkroon joined #gluster
16:02 farhorizon joined #gluster
16:03 wushudoin joined #gluster
16:05 wushudoin joined #gluster
16:06 mhulsman joined #gluster
16:11 nishanth joined #gluster
16:12 ankitraj joined #gluster
16:19 guhcampos joined #gluster
16:34 panina joined #gluster
17:00 hackman joined #gluster
17:11 opthomasprime left #gluster
17:12 Muthu joined #gluster
17:19 Pupeno joined #gluster
17:38 Gnomethrower joined #gluster
17:51 Pupeno joined #gluster
18:11 mhulsman joined #gluster
18:51 dnorman joined #gluster
18:56 rastar joined #gluster
18:57 jiffin joined #gluster
18:59 nishanth joined #gluster
19:07 farhorizon joined #gluster
19:17 jerrcs__ joined #gluster
19:45 panina joined #gluster
19:52 swebb joined #gluster
19:54 squizzi joined #gluster
20:00 farhorizon joined #gluster
20:02 PatNarciso tiering question: does the migration of data from hot-bricks<-->cold-bricks stay on the same server/node (based on its hostname?), OR is it based on the order of the bricks when the tier was added?  My setup is a 2-node distributed volume; each node has 1 cold and 1 hot brick.  If the tier'd brick order does matter, then I may have added the tier'd bricks in the wrong order, as 'gluster v status' shows the order HotB, HotA, ColdA, ColdB.
20:02 glusterbot PatNarciso: hot-bricks<'s karma is now -1
20:03 PatNarciso My concern is: the migration process is disk and network intensive.  During migration, data appears to be going from NodeA-Hot<-->NodeB-Cold and NodeB-Hot<-->NodeA-Cold.
20:03 glusterbot PatNarciso: NodeA-Hot<'s karma is now -1
20:03 glusterbot PatNarciso: NodeB-Hot<'s karma is now -1
20:03 PatNarciso And, to make matters worse: one of my hot bricks is full while another brick is nearly empty.  The migration process fails with the message 'disk full'.  I was curious whether it is certain that the order matters, before I initiate downtime to resolve this.
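A sketch of how the tier layout could be inspected and, if the attach order really is the problem, redone, assuming a volume named myvol and placeholder brick paths (detaching the hot tier migrates its data back to the cold tier, so this belongs in a maintenance window):

    # show how the hot and cold bricks are currently ordered
    gluster volume info myvol
    gluster volume tier myvol status

    # remove the hot tier, then re-attach the hot bricks in the intended order
    gluster volume tier myvol detach start
    gluster volume tier myvol detach status      # wait for the migration to finish
    gluster volume tier myvol detach commit
    gluster volume tier myvol attach nodeA:/bricks/hot nodeB:/bricks/hot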
20:04 squizzi joined #gluster
20:04 dnorman joined #gluster
20:17 dnorman joined #gluster
20:19 mhulsman joined #gluster
20:33 nocs joined #gluster
20:33 arpu joined #gluster
20:35 vbellur joined #gluster
20:37 msvbhat joined #gluster
20:37 mhulsman joined #gluster
20:37 alvinstarr joined #gluster
20:40 raffo_ joined #gluster
20:41 raffo_ joined #gluster
21:01 vbellur joined #gluster
21:13 Wizek_ joined #gluster
21:21 vbellur joined #gluster
21:24 Pupeno joined #gluster
21:27 derjohn_mob joined #gluster
21:35 panina joined #gluster
21:37 Pupeno joined #gluster
21:39 gem joined #gluster
21:54 hackman joined #gluster
22:02 Wizek_ joined #gluster
22:10 Pupeno joined #gluster
22:10 Pupeno joined #gluster
22:20 dnorman joined #gluster
22:26 panina joined #gluster
22:46 virusuy joined #gluster
23:18 Pupeno joined #gluster
23:32 dnorman joined #gluster
23:32 Pupeno joined #gluster
23:47 dnorman joined #gluster
23:50 Pupeno joined #gluster
23:55 vbellur joined #gluster
