
IRC log for #gluster, 2016-04-11


All times shown according to UTC.

Time Nick Message
00:33 cvstealth joined #gluster
00:36 bwerthmann joined #gluster
01:06 baojg joined #gluster
01:11 EinstCrazy joined #gluster
01:11 julim joined #gluster
01:12 B21956 joined #gluster
01:35 EinstCrazy joined #gluster
01:35 EinstCrazy joined #gluster
01:48 ilbot3 joined #gluster
01:48 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:15 EinstCrazy joined #gluster
02:27 harish joined #gluster
02:28 hagarth joined #gluster
02:41 ovaistariq joined #gluster
02:42 fcoelho joined #gluster
02:57 ramteid joined #gluster
03:08 volga629_ joined #gluster
03:08 rastar joined #gluster
03:09 volga629_ Hello Everyone, trying to enable SSL on management and getting error  E [socket.c:206:ssl_dump_error_stack] 0-socket.management:   error:1408F10B:SSL routines:SSL3_GET_RECORD:wrong version number
03:09 volga629_ How to specify TLSv1 or similar ?
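
    (For context: TLS in GlusterFS 3.7 is split between the I/O path, a per-volume option, and the
    management path, which is switched on by a marker file. A minimal sketch, assuming the stock
    certificate locations; "myvol" is a placeholder. As kkeithley notes further down in this log,
    management SSL was broken in 3.7.10 itself, which is the likely cause of the error above.)
        # certificates expected on every node
        ls /etc/ssl/glusterfs.pem /etc/ssl/glusterfs.key /etc/ssl/glusterfs.ca
        # opt the management daemon into TLS, then restart glusterd on all nodes
        touch /var/lib/glusterd/secure-access
        systemctl restart glusterd
        # TLS on the I/O path is a separate, per-volume switch
        gluster volume set myvol client.ssl on
        gluster volume set myvol server.ssl on
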
03:19 DV joined #gluster
03:32 shubhendu joined #gluster
03:32 atinm joined #gluster
03:53 nbalacha joined #gluster
04:01 kdhananjay joined #gluster
04:06 beeradb joined #gluster
04:07 itisravi joined #gluster
04:35 RameshN joined #gluster
04:38 gem joined #gluster
04:49 unforgiven512 joined #gluster
04:51 vmallika joined #gluster
04:58 poornimag joined #gluster
05:01 EinstCrazy joined #gluster
05:03 rafi joined #gluster
05:03 overclk joined #gluster
05:03 poornimag joined #gluster
05:05 kshlm joined #gluster
05:06 nbalacha joined #gluster
05:09 arcolife joined #gluster
05:12 ndarshan joined #gluster
05:13 hgowtham joined #gluster
05:14 karnan joined #gluster
05:17 jiffin joined #gluster
05:18 aspandey joined #gluster
05:27 Apeksha joined #gluster
05:28 edong23 joined #gluster
05:31 spalai joined #gluster
05:32 aravindavk joined #gluster
05:34 Bhaskarakiran joined #gluster
05:35 gowtham joined #gluster
05:35 nishanth joined #gluster
05:35 ovaistariq joined #gluster
05:52 ppai joined #gluster
05:53 karthik___ joined #gluster
05:59 mbukatov joined #gluster
06:02 Lee1092 joined #gluster
06:03 ovaistariq joined #gluster
06:03 ghenry joined #gluster
06:05 hchiramm joined #gluster
06:06 anil_ joined #gluster
06:10 skoduri joined #gluster
06:12 mhulsman joined #gluster
06:14 Wizek joined #gluster
06:15 EinstCrazy joined #gluster
06:24 javi404 joined #gluster
06:32 jtux joined #gluster
06:33 rastar joined #gluster
06:35 pur joined #gluster
06:35 EinstCrazy joined #gluster
06:37 [Enrico] joined #gluster
06:37 kdhananjay joined #gluster
06:37 itisravi joined #gluster
06:39 rouven joined #gluster
06:42 ramky joined #gluster
06:46 skoduri joined #gluster
06:48 atinm joined #gluster
06:51 fsimonce joined #gluster
06:51 RameshN joined #gluster
06:59 ovaistariq joined #gluster
07:01 Slashman joined #gluster
07:03 hackman joined #gluster
07:13 aravindavk joined #gluster
07:14 ivan_rossi joined #gluster
07:20 Vaelatern joined #gluster
07:24 jri joined #gluster
07:24 atinm joined #gluster
07:25 Bhaskarakiran joined #gluster
07:25 vmallika joined #gluster
07:27 shruti joined #gluster
07:31 Saravanakmr joined #gluster
07:31 rmerritt joined #gluster
07:31 ctria joined #gluster
07:34 javi404 joined #gluster
07:34 Bhaskarakiran joined #gluster
07:36 JesperA joined #gluster
07:38 kdhananjay joined #gluster
07:39 atalur joined #gluster
07:41 the-me joined #gluster
07:44 goretoxo joined #gluster
07:44 aravindavk joined #gluster
07:46 Gaurav_ joined #gluster
07:51 atinm joined #gluster
07:52 RameshN joined #gluster
08:05 skoduri joined #gluster
08:06 atalur joined #gluster
08:10 [diablo] joined #gluster
08:13 jwd joined #gluster
08:13 arcolife joined #gluster
08:14 cliluw joined #gluster
08:18 scobanx joined #gluster
08:18 baojg joined #gluster
08:19 spalai joined #gluster
08:29 anil_ joined #gluster
08:30 kdhananjay joined #gluster
08:31 baojg joined #gluster
08:32 harish_ joined #gluster
08:34 jiffin joined #gluster
08:42 auzty joined #gluster
08:43 kovshenin joined #gluster
08:47 atalur joined #gluster
08:53 spalai joined #gluster
09:01 Rasathus joined #gluster
09:19 jordie joined #gluster
09:22 rouven joined #gluster
09:23 d0nn1e joined #gluster
09:33 kdhananjay joined #gluster
09:35 itisravi joined #gluster
09:37 kdhananjay joined #gluster
09:43 hgowtham joined #gluster
09:45 aspandey joined #gluster
09:45 Gnomethrower joined #gluster
09:51 social joined #gluster
09:59 muneerse2 joined #gluster
10:03 social joined #gluster
10:09 harish_ joined #gluster
10:11 atalur_ joined #gluster
10:27 kdhananjay joined #gluster
11:03 Wizek joined #gluster
11:07 gem joined #gluster
11:10 johnmilton joined #gluster
11:34 Wizek joined #gluster
11:40 hgowtham joined #gluster
11:42 spalai joined #gluster
11:46 mowntan joined #gluster
11:52 ppai joined #gluster
12:05 unclemarc joined #gluster
12:15 ppai joined #gluster
12:25 chirino joined #gluster
12:34 EinstCrazy joined #gluster
12:41 Ulrar Can I write a temporary file directly in a brick directory ? The name won't collide with any existing file
12:41 plarsen joined #gluster
12:42 Ulrar I'm missing space for a test
12:42 johnmilton joined #gluster
12:42 aravindavk joined #gluster
12:46 ira joined #gluster
12:46 chirino joined #gluster
12:50 johnmilton joined #gluster
12:50 mdavidson joined #gluster
12:55 johnmilton joined #gluster
13:03 xiu left #gluster
13:04 rastar joined #gluster
13:10 m0zes joined #gluster
13:11 aravindavk joined #gluster
13:12 rwheeler joined #gluster
13:18 muneerse joined #gluster
13:18 JoeJulian Ulrar: If you must, I would probably use the .glusterfs subdirectory on the brick. I can't think of any specific issues that could come from creating a file in there that shouldn't be there.
13:19 post-factum that is why it is recommended to place a brick under a subdirectory and not under the mountpoint directly
13:21 JoeJulian That, too. Plus other reasons.
13:23 Ulrar Yeah, never thought of that I have to admit
13:23 Ulrar I will use subdirectories in the future
13:23 Ulrar Thanks
13:27 post-factum Ulrar: also, it is some kind of protection against mount failure. if the brick is not mounted, gluster will not try to heal it, if I understand correctly
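
    (A minimal sketch of the layout being recommended here; device, mountpoint and volume names are
    placeholders. Because the brick is a subdirectory of the mountpoint, a failed mount leaves the
    brick path missing and glusterd refuses to start that brick, rather than quietly writing or
    healing onto the root filesystem.)
        mount /dev/sdb1 /data/glusterfs/myvol          # the brick filesystem
        mkdir -p /data/glusterfs/myvol/brick1          # brick lives one level below the mountpoint
        gluster volume create myvol replica 2 \
            host1:/data/glusterfs/myvol/brick1 host2:/data/glusterfs/myvol/brick1
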
13:28 muneerse2 joined #gluster
13:28 B21956 joined #gluster
13:33 shyam joined #gluster
13:36 mpietersen joined #gluster
13:45 anil joined #gluster
13:45 Gnomethrower joined #gluster
13:48 spalai left #gluster
13:48 nbalacha joined #gluster
13:55 lpabon joined #gluster
13:56 kdhananjay joined #gluster
13:58 jwaibel joined #gluster
14:04 volga629_ Hello Everyone, getting this error when enabling SSL on management
14:04 volga629_ E [socket.c:206:ssl_dump_error_stack] 0-socket.management:   error:1408F10B:SSL routines:SSL3_GET_RECORD:wrong version number
14:04 volga629_ E [socket.c:2389:socket_poller] 0-socket.management: server setup failed
14:04 volga629_ E [socket.c:2557:socket_spawn] 0-socket.management: could not create poll thread
14:04 jiffin joined #gluster
14:05 volga629_ run on fedora23 server
14:05 volga629_ glusterfs-3.7.10-1.fc23.x86_64
14:08 haomaiwang joined #gluster
14:09 haomaiwa_ joined #gluster
14:10 haomaiwang joined #gluster
14:11 haomaiwang joined #gluster
14:12 post-factum @paste
14:12 glusterbot post-factum: For a simple way to paste output, install netcat (if it's not already) and pipe your output like: | nc termbin.com 9999
14:12 post-factum volga629_: ^^
14:12 haomaiwang joined #gluster
14:12 volga629_ ok I will use paste
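
    (Usage is as simple as piping whatever output you would otherwise paste into the channel;
    the log path below is the usual glusterd log location and is an assumption about this setup.)
        gluster volume status | nc termbin.com 9999
        tail -n 100 /var/log/glusterfs/etc-glusterfs-glusterd.vol.log | nc termbin.com 9999
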
14:13 haomaiwa_ joined #gluster
14:14 ndevos post-factum: what distribution do you use?
14:15 Hesulan joined #gluster
14:19 aravindavk joined #gluster
14:19 hchiramm joined #gluster
14:19 shirwah joined #gluster
14:20 mpieters_ joined #gluster
14:22 post-factum ndevos: arch as a desktop and personal server, centos as a production at work
14:23 ndevos post-factum: ah, ok :)
14:23 post-factum why?
14:23 ndevos oh, I'll build rpms for xglfs for my own testing, and I may share or push the .spec somewhere
14:24 post-factum that would be nice. should we, probably, include spec into source tree under contrib/?
14:24 post-factum I could prepare PKGBUILD as well
14:25 ndevos maybe, yes, I would like a "make rpms" or similar target :)
14:25 post-factum it is possible with cmake without writing spec, afaik
14:26 ndevos maybe, but I'm not too familiar with cmake... I've got an ugly spec that works for me ;-)
14:26 arcolife joined #gluster
14:26 post-factum https://cmake.org/cmake/help/v3.5/module/CPackRPM.html
14:26 glusterbot Title: CPackRPM CMake 3.5.1 Documentation (at cmake.org)
14:28 ndevos thanks, I might try that at one point
14:28 overclk joined #gluster
14:28 post-factum nevertheless, I'm OK with separate spec file
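
    (A rough sketch of the CPackRPM route mentioned above, assuming xglfs already builds with CMake;
    the package metadata values are illustrative, not taken from the project.)
        cat >> CMakeLists.txt <<'EOF'
        set(CPACK_GENERATOR "RPM")
        set(CPACK_PACKAGE_NAME "xglfs")
        set(CPACK_PACKAGE_VERSION "0.1.0")
        set(CPACK_RPM_PACKAGE_LICENSE "GPLv2")
        include(CPack)
        EOF
        mkdir build && cd build && cmake .. && make
        cpack -G RPM          # or: make package
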
14:29 shirwah Hi All, is it possible to create distributed/replicated volume using only two servers?
14:29 post-factum shirwah: yes
14:30 JoeJulian When the number of bricks you assign to a volume is a multiple of the replica, it is a distributed-replicated volume.
14:31 cliluw joined #gluster
14:31 post-factum shirwah: gluster volume create SoMeVoLuMe replica 2 host1:/brick1 host2:/brick1 host1:/brick2 host2:/brick2
14:32 post-factum shirwah: ^^ this is 2x2 distributed-replicated
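
    (After creating it with the command above, the layout can be confirmed from the CLI; the info
    fields shown are what 3.7 prints for a 2x2 volume.)
        gluster volume start SoMeVoLuMe
        gluster volume info SoMeVoLuMe
        # expect: Type: Distributed-Replicate, Number of Bricks: 2 x 2 = 4
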
14:32 dlambrig_ joined #gluster
14:32 shirwah post-facrum: Many thanks
14:32 shirwah <@JoeJulian>: Many thanks
14:33 post-factum spelling nick correctly makes one kitty in the world happier
14:33 JoeJulian You're welcome.
14:34 baojg joined #gluster
14:36 shirwah post-factum /JoeJulian - I'm working in an environment which has the glusterfs 3.5 repo available, should I advise them to use 3.7?
14:37 shaunm joined #gluster
14:38 post-factum shirwah: 3.7 is lovely
14:38 post-factum shirwah: well, mostly ;)
14:39 bennyturns joined #gluster
14:40 ira joined #gluster
14:43 wushudoin joined #gluster
14:43 shirwah post-factum - Thanks :)
14:44 farhorizon joined #gluster
14:44 wushudoin joined #gluster
14:45 kpease joined #gluster
14:48 baojg joined #gluster
14:56 gluco joined #gluster
15:10 skylar joined #gluster
15:17 spalai joined #gluster
15:22 dlambrig_ joined #gluster
15:26 DV joined #gluster
15:27 jwd joined #gluster
15:27 hagarth joined #gluster
15:45 skylar joined #gluster
15:46 JoeJulian post-factum: I'm just starting to catch up from being away for two weeks. xglfs is great!
16:04 skoduri joined #gluster
16:11 vmallika joined #gluster
16:11 bennyturns joined #gluster
16:11 bwerthmann joined #gluster
16:11 coredump joined #gluster
16:30 jwaibel joined #gluster
16:51 B21956 joined #gluster
16:55 post-factum JoeJulian: it is too simple to be so great
16:56 chaot_s joined #gluster
16:57 ivan_rossi left #gluster
16:59 julim joined #gluster
17:00 DV joined #gluster
17:01 DV joined #gluster
17:05 rafi joined #gluster
17:10 om joined #gluster
17:16 hackman joined #gluster
17:16 atalur_ joined #gluster
17:17 dlambrig_ joined #gluster
17:19 spalai joined #gluster
17:26 JoeJulian I just like seeing code submission from outside Red Hat.
17:29 JoeJulian @later tell ivan_rossi The number of landings was equal to the number of takeoffs. Just the way I like them.
17:29 glusterbot JoeJulian: The operation succeeded.
17:36 scobanx joined #gluster
17:38 bwerthmann joined #gluster
17:38 scobanx When heal is needed for a file in a disperse volume, does the brick owner node start the heal process? Does it need to read the other chunks from the subvolume bricks, recalculate its own chunk and write it?
17:46 spalai joined #gluster
17:46 sagarhani joined #gluster
17:49 JoeJulian Not sure about how disperse does heals yet. Why do you ask? Is something not working?
17:51 scobanx JoeJulian, I want to learn the inner workings of heal. When a file is being written to a subvolume and I interrupt the operation, the file is shown in the heal-needed output. I want to learn what happens to heal it and which server does the actions.
18:08 JoeJulian scobanx: If you figure that out, let me know. If you get stuck trying to figure it out, I'd be happy to try helping or at least give pointers on where I would look (which would be in the source code).
18:10 jwd joined #gluster
18:24 chirino joined #gluster
18:38 DV joined #gluster
18:41 jlockwood joined #gluster
18:41 volga629_ Hello Everyone, issue with glusterfs SSL enabled on management http://fpaste.org/354249/40007714/
18:41 glusterbot Title: #354249 Fedora Project Pastebin (at fpaste.org)
18:42 volga629_ What possible cuase
18:42 JoeJulian Is that the complete log?
18:42 volga629_ cause
18:42 volga629_ that message is repeating in log
18:44 kkeithley 3.7.10?  SSL on management is borked in 3.7.10.  3.7.11 will be released soon (I thought it was going to be today) to address this.
18:45 JoeJulian kkeithley++
18:45 glusterbot JoeJulian: kkeithley's karma is now 24
18:45 volga629_ yes 3.7.10
18:45 volga629_ ok thanks then I will wait
18:47 nishanth joined #gluster
18:48 post-factum https://aur.archlinux.org/packages/xglfs
18:48 cliluw joined #gluster
18:48 post-factum xglfs is available in AUR
18:48 post-factum JoeJulian: ^^ you might be interested in that
18:50 JoeJulian post-factum++
18:50 glusterbot JoeJulian: post-factum's karma is now 8
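
    (The usual way to build an AUR package by hand; the git URL follows the standard AUR naming
    convention rather than being quoted from the page above.)
        git clone https://aur.archlinux.org/xglfs.git
        cd xglfs
        makepkg -si          # build and install, pulling declared dependencies
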
18:52 m0zes joined #gluster
19:01 julim joined #gluster
19:03 kkeithley post-factum: what is AUR?
19:07 DV joined #gluster
19:07 post-factum kkeithley: Arch User Repository
19:07 kkeithley ah, cool
19:28 ira joined #gluster
19:39 luizcpg joined #gluster
19:44 spalai joined #gluster
19:48 kovshenin joined #gluster
19:51 jlockwood joined #gluster
19:51 jlockwood anybody available for some 'benchmarking' advice?
20:16 DV joined #gluster
20:17 Wizek_ joined #gluster
20:19 JoeJulian jlockwood: My advice is always to benchmark against your expected actual use case.
20:20 jlockwood I'm kinda stuck trying to make a case for a NetApp swap out for home spun.
20:20 JoeJulian Don't only test average performance numbers, but also look for peaks.
20:21 JoeJulian What's your SLA with netapp?
20:22 jlockwood Meaning performance or Support?
20:22 JoeJulian Sorry, your uptime sla I meant to say.
20:22 JoeJulian My brain is foggy today. I picked up a cold in Italy and it's kicking my butt today.
20:23 papamoose joined #gluster
20:23 jlockwood Internally 4 9's is what we're on the hook for.
20:23 JoeJulian Building a 5 nines with gluster is very easy.
20:23 jlockwood but we've only had one substantial outage in 4 yrs
20:24 jlockwood My only problems is getting like for like performance on writes.
20:25 jlockwood Unfortunately I don't have the spindles to throw at the POC systems that I have built out in my netapp env.
20:27 spalai joined #gluster
20:27 JoeJulian Bulk write transfer speed, I've never had a problem filling the network with fuse. NFS doesn't perform quite as well at overall throughput.
20:28 chirino joined #gluster
20:28 jlockwood NFS on my initial testing was way off, was gonna clear that up with ganesha (or such was the plan)
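
    (For comparison, the two client paths being discussed; "server1" and "myvol" are placeholders.
    Gluster's built-in NFS server in 3.7 speaks NFSv3 only, which is what ganesha was meant to replace.)
        mount -t glusterfs server1:/myvol /mnt/gluster                      # native fuse client
        mount -t nfs -o vers=3,mountproto=tcp server1:/myvol /mnt/gluster-nfs
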
20:30 Wizek__ joined #gluster
20:32 shyam joined #gluster
20:32 jlockwood currently I'm comparing fio and iozone results from a test VM and the performance on reads is very close, but writes are really really far apart.
20:32 jlockwood Since I'm very newbish to gluster I wanted to make sure I wasn't making lunkhead mistakes in setting up my POC system(s).
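
    (A hedged example of the kind of fio write job being compared here; block sizes, paths and
    runtimes are placeholders, not the poster's actual job files.)
        fio --name=seqwrite --directory=/mnt/gluster --ioengine=libaio --direct=1 \
            --rw=write --bs=1M --size=4G --numjobs=4 --iodepth=16 --group_reporting
        fio --name=randwrite --directory=/mnt/gluster --ioengine=libaio --direct=1 \
            --rw=randwrite --bs=4k --size=1G --numjobs=4 --iodepth=32 \
            --runtime=60 --time_based --group_reporting
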
20:34 papamoose joined #gluster
20:36 daMaestro joined #gluster
20:41 bennyturns joined #gluster
20:44 DV joined #gluster
20:45 dlambrig_ joined #gluster
20:47 jwaibel joined #gluster
20:49 DN joined #gluster
20:49 bwerthmann joined #gluster
20:49 DN Hello, wanted to check if glusterfs supports the S3 API, i.e. exporting the glusterfs data as S3 objects
20:50 DN I know it supports swift, but I want to avoid the swift layer (object, container, auth server)
20:51 DN wanted to check if glusterfs exports the object storage as S3
20:52 DN anyone have S3 implemented on glusterfs?
20:52 JoeJulian DN: #swiftonfile for that.
20:53 DN swiftonfile does S3 api ?
20:53 JoeJulian iirc
20:53 DN ok, thats good that Joe
20:53 DN *thanks
20:54 bwerthmann joined #gluster
20:58 JoeJulian jlockwood: what hypervisor are you using? What guest OS? What filesystem is the guest using? Those can all affect performance.
20:59 JoeJulian Check gluster volume top to get more insight as to what's happening.
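
    (Examples of the kind of insight meant here; "myvol" and the brick path are placeholders.)
        gluster volume top myvol write list-cnt 10       # most frequently written files
        gluster volume top myvol write-perf bs 256 count 1024 brick server1:/data/brick1 list-cnt 10
        gluster volume profile myvol start && gluster volume profile myvol info
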
20:59 johnmilton joined #gluster
20:59 skylar1 joined #gluster
21:00 jlockwood 14 LTS ext4
21:00 jlockwood esx 5.5
21:03 JoeJulian Ok, so a really old less-optimized filesystem on an nfs-only hypervisor. Not my favorite combination. ;)
21:03 jlockwood yeah, well, they always give me the best HW to 'test stuff out on' :p
21:03 JoeJulian :D
21:04 jlockwood I just managed to get off of a pair of 2950's for this, so I was starting to feel pretty good hehehehe
21:05 jlockwood but the guest is actually running off of the netapp, I'm just mounting target filesystems from there and running various I/O tests.
21:05 jlockwood just to be clear.
21:13 farhorizon joined #gluster
21:31 hagarth joined #gluster
21:34 spalai joined #gluster
21:40 shyam joined #gluster
21:41 JoeJulian If anybody here is a gluster expert that's looking to shop for a new job, send me a PM.
21:54 spalai joined #gluster
21:57 bwerthmann joined #gluster
21:58 hchiramm joined #gluster
22:10 chirino_m joined #gluster
22:18 hchiramm joined #gluster
22:22 DV joined #gluster
22:23 d0nn1e joined #gluster
22:25 gbox JoeJulian: Ciao!  Glad you're back. I actually tried to answer questions while you were gone--tough.  OTOH 3.7.6 runs beautifully for me now.
22:25 glusterbot gbox: gone's karma is now -1
22:28 JoeJulian gbox: Thanks! And I'm glad you got everything figured out. Answering questions is a wonderful way to improve your own expertise.
22:43 shaunm joined #gluster
22:59 Hesulan joined #gluster
23:00 DV joined #gluster
23:02 jlockwood joejulian, was just reading back, my 2 gluster peers are a pair of Dell R420's running centos 7 bricks are on zfs. Just realized I may have misunderstood your question.
23:03 JoeJulian Nah, I think we got it right. I wouldn't use zfs though.
23:03 jlockwood unless you actually did want to know which system I was running my tests from.
23:05 jlockwood I thought ZFS was what RH was building out as their storage product
23:06 JoeJulian xfs
23:06 JoeJulian RH won't touch zfs because of the licensing.
23:09 jlockwood I can always blow up and rebuild.
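
    (What the upstream gluster install guides recommend for brick filesystems, as a sketch; the
    device and mountpoint are placeholders. The larger inode size leaves room for gluster's
    extended attributes.)
        mkfs.xfs -i size=512 /dev/sdb1
        echo '/dev/sdb1 /data/brick1 xfs defaults 0 0' >> /etc/fstab
        mount /data/brick1
        mkdir /data/brick1/brick       # brick directory one level below the mountpoint
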
23:13 hchiramm joined #gluster
23:27 dlambrig_ joined #gluster
23:33 hchiramm joined #gluster
23:36 harish joined #gluster
