
IRC log for #gluster, 2016-06-27


All times shown according to UTC.

Time Nick Message
00:02 masuberu joined #gluster
00:10 bluenemo joined #gluster
00:12 luizcpg_ joined #gluster
00:34 F2Knight joined #gluster
00:41 luizcpg joined #gluster
00:57 shdeng joined #gluster
01:18 jiffin joined #gluster
01:23 om joined #gluster
01:33 masuberu joined #gluster
01:36 armyriad joined #gluster
01:40 Lee1092 joined #gluster
01:49 luizcpg_ joined #gluster
02:09 dnunez joined #gluster
02:17 mhorstman joined #gluster
02:19 mhorstman left #gluster
02:20 mhorstman joined #gluster
02:22 finbit1 joined #gluster
02:23 finbit1 left #gluster
02:31 gem joined #gluster
02:41 kshlm joined #gluster
02:46 jiffin joined #gluster
03:18 plarsen joined #gluster
03:23 jiffin1 joined #gluster
03:24 magrawal joined #gluster
03:31 luizcpg joined #gluster
03:52 itisravi joined #gluster
03:54 RameshN joined #gluster
03:58 shubhendu joined #gluster
04:07 sakshi joined #gluster
04:16 wushudoin joined #gluster
04:21 nehar joined #gluster
04:31 kdhananjay joined #gluster
04:31 harish_ joined #gluster
04:34 Bhaskarakiran joined #gluster
04:35 atalur joined #gluster
04:44 Apeksha joined #gluster
04:53 atinm joined #gluster
04:59 itisravi joined #gluster
05:02 ndarshan joined #gluster
05:03 kshlm joined #gluster
05:04 kramdoss_ joined #gluster
05:05 voidspacexyz[m] left #gluster
05:06 masuberu joined #gluster
05:08 aspandey joined #gluster
05:11 kotreshhr joined #gluster
05:14 ashiq joined #gluster
05:15 masuberu joined #gluster
05:16 prasanth joined #gluster
05:18 sbulage joined #gluster
05:26 karthik___ joined #gluster
05:28 aravindavk joined #gluster
05:31 Manikandan joined #gluster
05:34 prasanth joined #gluster
05:42 sac joined #gluster
05:44 kovshenin joined #gluster
05:46 hgowtham joined #gluster
05:47 sakshi joined #gluster
05:52 shruti joined #gluster
05:52 pgreg joined #gluster
05:52 rafi joined #gluster
05:53 jwd joined #gluster
05:54 gowtham joined #gluster
06:02 kshlm joined #gluster
06:08 [diablo] joined #gluster
06:08 [diablo] joined #gluster
06:11 rafi joined #gluster
06:22 jtux joined #gluster
06:25 skoduri joined #gluster
06:26 tom[] joined #gluster
06:30 karnan joined #gluster
06:34 rafi1 joined #gluster
06:36 anil_ joined #gluster
06:38 kdhananjay joined #gluster
06:41 poornimag joined #gluster
06:41 ramky joined #gluster
06:42 pur_ joined #gluster
06:43 renout_away joined #gluster
06:44 jiffin1 joined #gluster
06:45 ibotty joined #gluster
06:45 ibotty left #gluster
06:47 RameshN joined #gluster
06:51 rouven joined #gluster
06:51 nishanth joined #gluster
06:52 aravindavk joined #gluster
06:53 kramdoss_ joined #gluster
06:54 kotreshhr joined #gluster
06:57 shubhendu_ joined #gluster
07:01 kshlm joined #gluster
07:04 msvbhat joined #gluster
07:04 aspandey joined #gluster
07:04 rafi joined #gluster
07:06 xavih joined #gluster
07:06 malevolent joined #gluster
07:08 Saravanakmr joined #gluster
07:18 ivan_rossi joined #gluster
07:18 mbukatov joined #gluster
07:25 rafi joined #gluster
07:25 tom[] joined #gluster
07:26 jri joined #gluster
07:28 hackman joined #gluster
07:30 [Enrico] joined #gluster
07:36 rafi joined #gluster
07:37 Intensity joined #gluster
07:40 kdhananjay joined #gluster
07:42 Wizek joined #gluster
07:45 deniszh joined #gluster
07:50 shubhendu joined #gluster
07:51 aravindavk joined #gluster
07:54 karthik___ joined #gluster
08:00 RameshN joined #gluster
08:01 rafi1 joined #gluster
08:02 atinm joined #gluster
08:03 sakshi joined #gluster
08:04 Bhaskarakiran joined #gluster
08:05 Slashman joined #gluster
08:07 Gnomethrower joined #gluster
08:08 post-factum started upgrading to v3.7.12+extra patches
08:08 ashiq joined #gluster
08:08 post-factum looks good for first 2 nodes
08:08 kotreshhr joined #gluster
08:15 shubhendu_ joined #gluster
08:16 gem joined #gluster
08:16 jiffin joined #gluster
08:21 itisravi joined #gluster
08:23 Bhaskarakiran_ joined #gluster
08:26 rastar joined #gluster
08:26 post-factum but I've got some question about rebalance
08:26 Bhaskarakiran joined #gluster
08:27 post-factum say, I performed a rebalance once, and things got rebalanced. no volume layout changes were made after the rebalance. but after some time, say 1 month or so, I perform another rebalance, and there are files that need to be rebalanced!
08:27 post-factum why?
08:27 kshlm joined #gluster
08:29 shubhendu joined #gluster
08:34 kdhananjay joined #gluster
08:34 itisravi post-factum: possibly because you renamed files?
08:34 aspandey_ joined #gluster
08:34 post-factum itisravi: might be true, renaming occurs on the volume. but i thought that should be handled automatically, no?
08:36 itisravi renaming creates link-to files. Those get moved to the appropriate hashed subvol if you do a rebalance.
08:37 paul98 joined #gluster
08:38 post-factum itisravi: makes sense to me now, thanks, but it sounds like an architecture issue
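
A minimal sketch of the rebalance being discussed, assuming a hypothetical volume named "myvol": a plain rebalance migrates misplaced files (including the link-to files left behind by renames) to their hashed subvolume, while fix-layout only recomputes directory hash ranges.

    gluster volume rebalance myvol fix-layout start   # recompute hash layouts only, no data movement
    gluster volume rebalance myvol start              # also migrate data and stray link-to files
    gluster volume rebalance myvol status             # poll until the run reports "completed"
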
08:50 atalur joined #gluster
08:54 kdhananjay joined #gluster
08:58 atinm joined #gluster
08:59 TvL2386 joined #gluster
09:06 gvandeweyer joined #gluster
09:12 gvandeweyer joined #gluster
09:18 jiffin1 joined #gluster
09:19 kdhananjay joined #gluster
09:19 kdhananjay left #gluster
09:19 kdhananjay joined #gluster
09:22 aspandey joined #gluster
09:22 itisravi_ joined #gluster
09:23 itisravi joined #gluster
09:44 mdavidson joined #gluster
09:55 ashiq joined #gluster
09:57 aravindavk joined #gluster
09:57 gem joined #gluster
10:02 DV joined #gluster
10:44 msvbhat joined #gluster
10:44 kshlm joined #gluster
10:47 kotreshhr joined #gluster
10:50 itisravi joined #gluster
10:57 shubhendu_ joined #gluster
10:59 nishanth joined #gluster
11:05 wadeholler joined #gluster
11:06 kotreshhr joined #gluster
11:08 Dogethrower joined #gluster
11:25 ashiq joined #gluster
11:30 rafaels joined #gluster
11:39 wadeholler joined #gluster
11:42 sakshi joined #gluster
11:43 DV joined #gluster
11:44 rouven joined #gluster
11:45 itisravi joined #gluster
11:46 Manikandan joined #gluster
11:47 itisravi_ joined #gluster
11:50 rafi joined #gluster
11:52 anoopcs 3.7.12
11:56 d0nn1e joined #gluster
11:59 sokratisg joined #gluster
11:59 sokratisg hello everyone
12:00 sokratisg quick question, I have a distributed replicated volume on bricks of unequal size
12:00 sokratisg lately I added a new brick which is far larger than all the others (200GB, 100GB vs 1.2TB)
12:00 sokratisg thing is I cannot empty the existing bricks even after rebalancing
12:01 sokratisg any ideas of how to overcome this issue?
12:01 sokratisg (I do know bricks of unequal size are not recommended, but that's what I have now, so I am trying to remove all of them and migrate to a new, recommended structure)
12:03 jiffin sokratisg: i didn't get ur point
12:03 sokratisg right now I have a single replicated volume which consists of 4 bricks
12:03 sokratisg brick01: 100GB
12:03 jiffin if u want to remove a brick after addition, u need to execute remove-brick as well
12:03 sokratisg brick02: 200GB
12:04 sokratisg etc.
12:04 sokratisg I've already tried evacuating the bricks by using remove-brick but they are not of equal size
12:05 sokratisg so during the process some bricks get 100% filled, so I start receiving failures and I end up stopping the operation
12:06 sokratisg is there any other way to work around the unequal brick sizes I currently have, so that I can empty all of them into the new big brick I've already added (1.2TB)
12:06 sokratisg ?
12:07 sokratisg I checked the replace-brick command but I think this is for moving existing bricks to new, empty ones
12:07 sokratisg this is not the case
12:08 jiffin if I understand u correctly, initially the volume has two bricks, brick01 (100GB) and brick02 (200GB), then u added a new large brick of 1.2 TB and performed a rebalance, right?
12:09 karnan joined #gluster
12:09 masuberu joined #gluster
12:10 sokratisg jiffin correct
12:10 sokratisg brick01 (100GB), brick02 (200GB), brick03 (100GB), brick05 (200GB)
12:11 sokratisg and now I've added a new brick06 (1.2TB) and did rebalance
12:11 sokratisg but brick06 got only ~80GB of data after the balancing operation, which is reasonable considering that all the other bricks are much lower in size
12:11 jiffin if u have a distributed replicated setup, then u need to add a new pair instead of a single brick, right?
12:12 sokratisg yes, replication factor = 3, so everything is tripled, I just didn't mention it to simplify our conversation
12:13 jiffin basically u need to perform the remove-brick operation on each pair, it may take some time
12:13 jiffin gluster volume remove-brick <volname> <brick list> start
12:14 jiffin remove-brick will copy the existing data to remaining pairs
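
A rough sketch of that remove-brick workflow for a distributed-replicated (replica 3) volume, with hypothetical volume, server, and brick names; the whole replica set is removed together, data is drained while the volume stays online, and the removal is only committed once status reports completed with no failures.

    gluster volume remove-brick myvol srv1:/bricks/b01 srv2:/bricks/b01 srv3:/bricks/b01 start
    gluster volume remove-brick myvol srv1:/bricks/b01 srv2:/bricks/b01 srv3:/bricks/b01 status
    gluster volume remove-brick myvol srv1:/bricks/b01 srv2:/bricks/b01 srv3:/bricks/b01 commit
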
12:15 sokratisg that's what I am trying to do now, but it needs proper planning and I am not sure if everything will fit while emptying
12:15 sokratisg so far I have been hitting failures due to space constraints
12:15 sokratisg thought I'd ask just in case there is a better approach
12:17 aravindavk joined #gluster
12:19 skoduri joined #gluster
12:20 jiffin sokratisg: i don't think so. otherwise, create a new volume using the new bricks, mount that volume and then copy/move the entire data from the existing volume to the new one
12:20 wadeholler joined #gluster
12:20 jiffin it may save ur time
12:20 cvstealth joined #gluster
12:21 sokratisg what needs to be done if I do the manual copying ?
12:21 sokratisg I mean in terms of metadata and stuff
12:21 sokratisg I can easily do an rsync
12:26 RameshN joined #gluster
12:28 jiffin sokratisg: brick-to-brick copying is not recommended; at least copy it from the existing brick to the mount point of the new volume
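
A rough sketch of that copy-to-a-new-volume approach, with hypothetical names; copying between client mount points (rather than reading brick directories directly) lets gluster maintain its own metadata on both sides.

    gluster volume create newvol replica 3 srv1:/bricks/big/newvol srv2:/bricks/big/newvol srv3:/bricks/big/newvol
    gluster volume start newvol
    mount -t glusterfs srv1:/oldvol /mnt/oldvol
    mount -t glusterfs srv1:/newvol /mnt/newvol
    rsync -aHAX --progress /mnt/oldvol/ /mnt/newvol/
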
12:30 kovshenin joined #gluster
12:31 luizcpg joined #gluster
12:32 magrawal joined #gluster
12:32 karnan joined #gluster
12:35 glafouille joined #gluster
12:39 dnunez joined #gluster
12:40 nehar joined #gluster
12:43 atinm joined #gluster
12:54 sokratisg thank you jiffin for all the help, will try and get back if I have more questions. really appreciate it.
12:59 ic0n joined #gluster
13:07 kotreshhr joined #gluster
13:07 kotreshhr left #gluster
13:28 shyam joined #gluster
13:31 wadeholler joined #gluster
13:33 wadeholler joined #gluster
13:34 msvbhat joined #gluster
13:36 baojg joined #gluster
13:36 hybrid512 joined #gluster
13:46 plarsen joined #gluster
13:46 dlambrig_ joined #gluster
13:50 dlambrig_ left #gluster
13:59 arcolife joined #gluster
14:08 lalatenduM joined #gluster
14:09 glafouille joined #gluster
14:10 atinm joined #gluster
14:16 Seth_Karlo joined #gluster
14:18 Seth_Karlo Hey all, is this the correct place to report an error with download.gluster.org? I've spent a good portion of my afternoon debugging an issue that essentially boiled down to the 3.8 folder of the repository not having an EPEL.repo folder (it's named EPEL)
14:18 ghenry joined #gluster
14:19 Seth_Karlo http://download.gluster.org/pub/gluster/glusterfs/LATEST/ is pointing to http://download.gluster.org/pub/gluster/glusterfs/3.8/LATEST/, and the naming for EPEL is not the same as http://download.gluster.org/pub/gluster/glusterfs/3.7/LATEST/
14:19 glusterbot Title: Index of /pub/gluster/glusterfs/LATEST (at download.gluster.org)
14:20 Seth_Karlo Which is breaking yum all over the place
14:20 harish__ joined #gluster
14:20 manous joined #gluster
14:20 kkeithley http://download.gluster.org/pub/gluster/glusterfs/3.8/3.8.0/EPEL/EPEL.README
14:20 squizzi joined #gluster
14:21 Seth_Karlo It's not a very clear message, does this mean download.gluster.org will no longer be hosting packages from 3.8 onwards?
14:22 Seth_Karlo And if that is the case, why not leave the latest links to 3.7 so people don't need to update their configuration? Surely an older version is more convenient than a 404 error?
14:30 kkeithley I've posted three messages in the last month to the gluster-users and gluster-devel mailing list warning that this change was coming.  Edit your /etc/yum.repos.d/gluster-epel.repo and change .../LATEST/... to .../3.7/LATEST/...
14:30 kkeithley last month or so
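
A minimal sketch of the workaround kkeithley describes, assuming the default baseurl layout in gluster-epel.repo (check the file first; the exact line may differ on your system).

    sed -i 's|glusterfs/LATEST|glusterfs/3.7/LATEST|g' /etc/yum.repos.d/gluster-epel.repo
    yum clean metadata
    yum makecache fast
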
14:33 gowtham joined #gluster
14:34 ndevos repost from #centos-devel
14:34 ndevos 16:32 < ndevos> does anyone know if there is a (centos) VM-image that boots and shutdowns to confirm QEMU is working correctly?
14:34 ndevos 16:33 < ndevos> I'd like to add a CI test that runs the VM on top of Gluster and reports a failure in case someone broke it
14:35 msvbhat joined #gluster
14:38 ben453 joined #gluster
14:39 dgandhi joined #gluster
14:42 Manikandan joined #gluster
14:47 ctria joined #gluster
14:51 wushudoin joined #gluster
14:52 wushudoin joined #gluster
15:01 wushudoin joined #gluster
15:01 wushudoin joined #gluster
15:03 kpease joined #gluster
15:09 john51 joined #gluster
15:10 aravindavk joined #gluster
15:15 wushudoin joined #gluster
15:16 kpease joined #gluster
15:18 wushudoin joined #gluster
15:21 luizcpg joined #gluster
15:42 karnan joined #gluster
15:47 skylar joined #gluster
15:48 azilian joined #gluster
15:52 hackman joined #gluster
15:52 Slashman joined #gluster
15:58 lanning joined #gluster
15:59 misc ndevos: the cloud image should do the trick
15:59 misc ndevos: you can modify it with guestfs
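
A rough sketch of the kind of boot-and-shutdown smoke test being discussed, assuming a CentOS GenericCloud image copied onto a gluster volume and a qemu built with gluster support (the URL, volume name, and credentials are illustrative, not a tested recipe).

    curl -LO http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2
    virt-customize -a CentOS-7-x86_64-GenericCloud.qcow2 --root-password password:gluster-ci   # optional guestfs tweak, as misc suggests
    # after copying the image onto the gluster volume, boot it headless over libgfapi
    qemu-system-x86_64 -enable-kvm -m 1024 -nographic \
        -drive file=gluster://gluster1/ci-vol/CentOS-7-x86_64-GenericCloud.qcow2,if=virtio
    # a passing run reaches a login prompt on the serial console; shut the guest down and report the result
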
16:03 ivan_rossi left #gluster
16:05 shubhendu joined #gluster
16:22 karnan joined #gluster
16:36 om2 joined #gluster
16:37 om2 left #gluster
16:42 manous joined #gluster
16:42 timotheus1_ joined #gluster
16:51 kramdoss_ joined #gluster
16:57 squizzi_ joined #gluster
17:04 jri joined #gluster
17:14 pampan joined #gluster
17:18 manous joined #gluster
17:30 skoduri joined #gluster
17:37 manous hello
17:37 glusterbot manous: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
17:37 manous i have replica 2  in my gluster volume
17:37 manous is it possible to change to 3 ?
17:40 JoeJulian manous: You bet. Just "... add-brick replica 3 <new brick>"
17:41 manous ok
17:41 manous thanks
17:41 JoeJulian You're welcome.
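
Spelled out with hypothetical names, the command JoeJulian refers to looks roughly like this for a plain replica 2 volume; the new replica is then populated by self-heal.

    gluster volume add-brick myvol replica 3 server3:/bricks/myvol/brick1
    gluster volume heal myvol full     # kick off a full self-heal onto the new brick
    gluster volume heal myvol info     # watch the pending-heal count drop to zero
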
17:44 nbalacha joined #gluster
17:47 nishanth joined #gluster
17:48 manous i have another issue
17:48 manous i want to use gluster as backend storage on kvm
17:48 manous i found this link http://www.gluster.org/community/documentation/index.php/Libgfapi_with_qemu_libvirt
17:50 manous i created the volume, but didn't have the privilege
17:50 Philambdo joined #gluster
17:50 manous is it right way to do that ?
17:51 jwd joined #gluster
17:53 JoeJulian The glusterd management daemon is privileged and actually performs all the actions necessary, so if you can run the gluster command unprivileged (normally you cannot because you won't have access to the log files) it will simply inform glusterd what command you're trying to run and glusterd will do all the work.
17:53 JoeJulian Short answer, yes, that's fine.
17:54 manous but my gluster is not on the kvm hypervisor
17:56 JoeJulian That's fine. Your hypervisor is a client.
17:56 JoeJulian You can have thousands of clients that are not part of the cluster.
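
A condensed, hedged sketch of the usual libgfapi preparation from the page manous linked, with hypothetical volume and host names (the qemu uid/gid of 107 varies by distribution, and qemu/qemu-img must be built with gluster support).

    gluster volume set vmvol server.allow-insecure on
    # on every gluster server, add to /etc/glusterfs/glusterd.vol:  option rpc-auth-allow-insecure on
    systemctl restart glusterd
    gluster volume set vmvol storage.owner-uid 107
    gluster volume set vmvol storage.owner-gid 107
    qemu-img create -f qcow2 gluster://gluster1/vmvol/vm1.qcow2 20G   # run from the hypervisor, which acts as a client
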
17:57 hchiramm joined #gluster
18:04 luizcpg joined #gluster
18:08 shyam joined #gluster
18:20 manous joined #gluster
18:37 luizcpg joined #gluster
18:38 karnan joined #gluster
18:45 deniszh joined #gluster
18:56 plarsen joined #gluster
19:16 nishanth joined #gluster
19:19 hackman joined #gluster
19:51 julim joined #gluster
20:14 luizcpg joined #gluster
20:22 ahino joined #gluster
20:46 shyam joined #gluster
20:46 deniszh joined #gluster
20:47 DV joined #gluster
20:53 ahino joined #gluster
21:39 johnmilton joined #gluster
21:50 papamoose joined #gluster
21:58 johnmilton joined #gluster
22:03 F2Knight joined #gluster
22:26 nishanth joined #gluster
22:35 levi501d joined #gluster
22:36 levi501d has anyone here used Ganesha?
22:37 levi501d I am trying to force it to bind to 127.0.0.1 but there doesn't seem to be a way to set that in the config, they have NFS_CORE_PARAMS but adding that to ganesha.conf just breaks it.
23:30 tg2 joined #gluster
23:30 levi501d not sure yet, but I'm fairly certain Ganesha vs NFS on gluster is much worse. I got NFS on gluster working in less than a minute; I'm still trying to configure Ganesha, and their docs are horrid
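
For what it's worth, in most nfs-ganesha releases the block levi501d wants is spelled NFS_CORE_PARAM (singular), and it accepts a bind address; a hedged ganesha.conf fragment (option names can vary slightly between versions):

    NFS_CORE_PARAM {
        Bind_Addr = 127.0.0.1;
        NFS_Port = 2049;
    }
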
23:36 muneerse joined #gluster
