Digaum, in the Chiquititas orphanage there's a clip that shows the laundry
room is right next to the playground, and in another clip Chico's vegetable
garden appears beside the kitchen...
Luccas needs to take it down a notch when he goes to badmouth Viih or this
video of hers!! His jealousy over the fact that she has millions of
subscribers and way more millions of views than he does is leaving him kind
of resentful, right?! ... Let him keep padding his hundred-and-something
thousand subscribers while Viih is over here with her millions... 2 kisses of light :*
By Li Haoyi. This talk will explore the hands-on beginner experience of making things with Scala.js. In particular, it will cover a few topics: how to make a ...
A simple movie showing how to set up your git to work on a Windows OS - as a friend of mine had a hard time doing this. Sorry for a little confusion about the ...
Linking GitHub with IntelliJ IDEA (HD 720)
Hey, how's it going bros, and welcome to another video on Rycast Narm. Subscribe and like! Today I shall show you how to link GitHub with IntelliJ IDEA. Link for ...
Introduction to the State monad from Scalaz, focused on step-by-step examples. Source code available on github: https://github.com/mpilquist/scalaz-state-talk ...
Wow. That (IMO) is a particularly insidious bug that most likely would have
been really hard to find (for most people like me :-) ). Does this mean that
'state' should have been defined like this: def state[S, A](a: => A):
State[S, A] = State { s => (s, a) } IOW, pass the parameter by name instead
of by value? Otherwise, it's too easy to use State.state in a place where you
could introduce a bug. Is there a downside to this?
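For what it's worth, here is a minimal self-contained sketch of the difference the by-name parameter makes (plain Scala rather than the real scalaz types; stateByName and callWebService are made-up names for illustration):

// Minimal stand-in for scalaz's State, just to contrast the two definitions.
final case class State[S, A](run: S => (S, A))

object State {
  // By-value, like scalaz's State.state: `a` is evaluated immediately.
  def state[S, A](a: A): State[S, A] = State(s => (s, a))

  // By-name, as proposed above: `a` is evaluated only when `run` is called.
  def stateByName[S, A](a: => A): State[S, A] = State(s => (s, a))
}

object ByNameDemo extends App {
  def callWebService(): String = { println("web service called!"); "response" }

  val eager    = State.state[Unit, String](callWebService())       // prints immediately
  val deferred = State.stateByName[Unit, String](callWebService()) // prints nothing yet

  println("--- running the deferred action ---")
  deferred.run(())  // only now does the second call happen
}

One downside worth noting: the by-name version re-evaluates a on every run of the action, so a side-effecting expression fires once per execution instead of once at construction.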
I'd probably just use State.apply like this: State { s => (s,
callWebService(u)) } Another approach is to make sure it isn't the first
generator in the for-comprehension: for { _
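A rough sketch of that second approach (the snippet above is cut off, so this reconstruction is only a guess at its shape, assuming scalaz 7, an illustrative Int state, and a stand-in callWebService):

import scalaz._, Scalaz._

object NotFirstGenerator {
  def callWebService(u: String): String = s"response for $u"

  def retrieve(u: String): State[Int, String] =
    for {
      _      <- State.modify[Int](identity)                 // no-op first step
      result <- State.state[Int, String](callWebService(u)) // now built inside flatMap
    } yield result
}

Because the eager State.state value is constructed inside the function passed to flatMap, callWebService only runs when the whole action is run, not when retrieve is called.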
I forgot to mention, but I did find it quite difficult to keep the context
between snippets of code. The changes between slides were sometimes so small
that it was hard to remember what the old code looked like and what the
refactor actually did. Perhaps some sort of diff-based output might help
with this? Not sure how it can be done cleanly with so little space on
screen, though.
Very good presentation! You really helped me to grasp the concept. One
remark: at 1:05, you explain that the webservice must be called when the
state is run, not when it is constructed. But your solution does not work:
the first line of the for will be evaluated when you call retrieve, and
State.state is call by value. So the webservice is called too early.
Unfortunately, I didn't record the first presentation. It was an
introduction to scalaz 7 and covered things like option enhancements,
validation, and some of the major typeclasses. The slides are available on
slideshare. I updated the video description to include a link to the slides
from that talk.
Yep. In order to call point, you need an Applicative for QueryStateS in
scope, which is available in this case. Generally, any place that was using
the Pointed typeclass can now use Applicative#point instead.
At 1:46, you suggest using Pointed. Pointed got removed from scalaz in
December; would replacing liftE with def liftE[A](s: String \/ A):
QueryStateES[A] = apply(s.point[QueryStateS]) work?
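Something along those lines does work in post-Pointed scalaz 7. A sketch with stand-in aliases (QueryState and the two type aliases here are placeholders mirroring the talk's stack, not its exact definitions):

import scalaz._, Scalaz._

object LiftESketch {
  final case class QueryState(history: List[String]) // stand-in state type

  type QueryStateS[A]  = State[QueryState, A]
  type QueryStateES[A] = EitherT[QueryStateS, String, A]

  // Applicative[QueryStateS] supplies point, so Pointed is not needed.
  def liftE[A](s: String \/ A): QueryStateES[A] =
    EitherT(s.point[QueryStateS])
}

This is the question's apply(s.point[QueryStateS]) with EitherT.apply written out explicitly.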
How to set up a Spark project with Scala IDE, Maven, and GitHub
In this video tutorial I show how to set up a Spark project with Scala IDE, Maven, and GitHub. In addition, a word count tutorial example is shown. Links: pom.xml: ...
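For orientation, the word count shown in the video has roughly this shape (a sketch against the Spark 1.x API; the app name, master setting, and input path are illustrative placeholders, not the tutorial's exact code):

import org.apache.spark.{SparkConf, SparkContext}

object WordCount {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("WordCount").setMaster("local[*]")
    val sc = new SparkContext(conf)

    val counts = sc.textFile("input.txt")   // one record per line
      .flatMap(_.split("\\s+"))             // split each line into words
      .map(word => (word, 1))               // pair every word with a count of 1
      .reduceByKey(_ + _)                   // sum the counts per word

    counts.collect().foreach(println)
    sc.stop()
  }
}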
Thanks for the tutorial, I followed all your steps and it worked. I could run
your code successfully.
The thing is, I have a project to do which is related to the Spark SQL lib,
and currently the Scala IDE is not recognizing it. Can you suggest what I
should do? Can you also include the rest of the libraries (MLlib, GraphX, and Streaming)?
Thank you again!
+Windevil117 In order to use other libs you need to add the dependency to the pom.xml file:

<dependencies>
  ...
  <dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-sql_2.10</artifactId>
    <version>1.4.0</version>
  </dependency>
  ...
</dependencies>

Make sure to pick the right version. For the other libraries it works the same way; just search on Google for the dependency you need.
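Once the dependency resolves, a tiny smoke test along these lines (Spark 1.4-era API; the object name and sample data are made up for illustration) should confirm that the Spark SQL classes are visible:

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

// Minimal check that spark-sql is on the classpath and usable.
object SqlSmokeTest {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("sql-test").setMaster("local[*]"))
    val sqlContext = new SQLContext(sc)
    import sqlContext.implicits._

    // Build a two-row DataFrame from an in-memory sequence and query it.
    val df = sc.parallelize(Seq(("a", 1), ("b", 2))).toDF("key", "value")
    df.filter($"value" > 1).show()

    sc.stop()
  }
}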
Hello, thank you for your tutorial, I followed your instructions to build the
environment. But I get a problem when I remove the Scala container from the
project (at 6:30/21:17). My Eclipse reminds me that "Unable to find a scala
library. Please add the scala container or scala library jar to build path."
And here is some of my pom:
<repository>
  <id>scala</id>
  <name>Scala Tools</name>
  <url>http://scala-tools.org/repo-releases/</url>
  <releases><enabled>true</enabled></releases>
  <snapshots><enabled>false</enabled></snapshots>
</repository>

<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-core_2.10</artifactId>
  <version>1.4.0</version>
</dependency>

<plugin>
  <groupId>org.scala-tools</groupId>
  <artifactId>maven-scala-plugin</artifactId>
  <executions>
    <execution>
      <goals>
        <goal>compile</goal>
        <goal>testCompile</goal>
      </goals>
    </execution>
  </executions>
  <configuration>
    <sourceDir>src/main/scala</sourceDir>
    <jvmArgs>
      <jvmArg>-Xms64m</jvmArg>
      <jvmArg>-Xmx1024m</jvmArg>
    </jvmArgs>
  </configuration>
</plugin>
Could you please tell me how I can solve it?
Thank you
+Scott Zhou Try to clean the project. If that does not work, add this dependency to your pom:

<dependency>
  <groupId>org.scala-lang</groupId>
  <artifactId>scala-compiler</artifactId>
  <version>2.10.4</version>
</dependency>
Any idea why I got this problem?
MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 9.9 KB, free 1955.5 MB)
15/11/09 23:30:56 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on localhost:59229 (size: 9.9 KB, free: 1955.6 MB)
15/11/09 23:30:56 INFO SparkContext: Created broadcast 0 from textFile at WordCount.scala:15
15/11/09 23:30:57 ERROR Shell: Failed to locate the winutils binary in the hadoop binary path
java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
    at org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:278)
    at org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:300)
    at org.apache.hadoop.util.Shell.<clinit>(Shell.java:293)
    at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:76)
    at org.apache.hadoop.mapred.FileInputFormat.setInputPaths(FileInputFormat.java:362)
    at org.apache.spark.SparkContext$$anonfun$hadoopFile$1$$anonfun$32.apply(SparkContext.scala:978)
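The last lines are the usual Spark-on-Windows issue: Hadoop's shell utilities look for winutils.exe, and with no HADOOP_HOME set the path degenerates to null\bin\winutils.exe. A commonly suggested workaround (the path here is illustrative) is to download winutils.exe into some directory's bin folder and point hadoop.home.dir at that directory before the SparkContext is created:

object WordCount {
  def main(args: Array[String]): Unit = {
    // Illustrative path: expects C:\hadoop\bin\winutils.exe to exist.
    System.setProperty("hadoop.home.dir", "C:\\hadoop")

    // ... then create the SparkConf / SparkContext as in the tutorial.
  }
}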
I started working on Spark a few weeks ago. I think it would be very helpful if you created a series about all of Spark's topics, with mini-example Scala programs.