A trick for using “Find Usages” in IntelliJ IDEA as a Go IDE

Recently, I have been using IntelliJ IDEA as a Go IDE to browse the Docker Swarm code. When I wanted to find where the Discovery.Watch method in the token package is called, the “Find Usages” (Alt+F7) function of IntelliJ IDEA confused me:

[screenshot: “Find Usages” result for Discovery.Watch]

It reports only one occurrence: the test code of the token package. That made no sense; where on earth is the Discovery.Watch method called? When I searched, by accident, for the usages of Watch in the Watcher interface, which token.Discovery satisfies, I caught where the Discovery.Watch method is actually used:

[screenshot: usages of Watcher.Watch]

The lesson here: Go types satisfy interfaces implicitly, so call sites are often typed as an interface rather than as the concrete type, and “Find Usages” on a concrete method can miss them. If you can’t find where the methods of a package are called, try searching the interfaces its types satisfy; that may give you the answer.

 

Use IntelliJ IDEA as a Golang IDE

My IntelliJ IDEA is v15.0.1 Community Edition, with the Go plugin installed. My Golang workspace looks like this:

GOPATH=C:\Work\gocode
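
By convention, a GOPATH workspace contains bin, pkg, and src subdirectories. A sketch of the layout (the swarm path is just an example of where the Docker Swarm source would live):

C:\Work\gocode
    bin\    compiled executables
    pkg\    compiled package objects
    src\    source code, e.g. src\github.com\docker\swarm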

(1) Select “Create New Project“:

[screenshot]

(2) Select “Go“, then “Next“:

[screenshot]

(3) Select “SDK“, then “Next“:

[screenshot]

(4) Here comes the important step! The upper setting is where the IDEA project files are stored, while the lower one is the Golang workspace. Then click “Finish“:

[screenshot]

(5) Now, use IntelliJ IDEA as a Golang IDE:

[screenshot]
Enjoy it!

Build an Apache Spark Application in IntelliJ IDEA 14.1

My operating system is Windows 7, so this tutorial may differ slightly for your environment.

First, you should install Scala 2.10.x on Windows to run Spark; otherwise you will get errors like this:

Exception in thread "main" java.lang.NoSuchMethodError: scala.collection.immutable.HashSet$.empty()Lscala/collection/immutable/HashSet;
        at akka.actor.ActorCell$.<init>(ActorCell.scala:305)
        at akka.actor.ActorCell$.<clinit>(ActorCell.scala)
        at akka.actor.RootActorPath.$div(ActorPath.scala:152)
        ......

Please refer to this post.
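
If you are not sure which Scala version is actually on your classpath, one quick check (a minimal sketch; the object name ScalaVersionCheck is mine) is to print the version of the Scala library at runtime. Spark 1.3.x is built against the 2.10.x line:

object ScalaVersionCheck {
  def main(args: Array[String]) {
    // Prints the version of the scala-library jar on the classpath,
    // e.g. "version 2.10.4"
    println(scala.util.Properties.versionString)
  }
}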

Second, you should install the Scala plugin and create a Scala project; you can refer to this document: Getting Started with Scala in IntelliJ IDEA 14.1.

After all the above steps are done, the project view should look like this:

[screenshot]

Then follow the next steps:

(1) Select “File” -> “Project Structure“:

[screenshot]

(2) Select “Modules” -> “Dependencies” -> “+” -> “Library” -> “Java“:

[screenshot]

(3) Select spark-assembly-x.x.x-hadoopx.x.x.jar, then press “OK“:

[screenshot]

(4) Configure the library, then press “OK“:

[screenshot]

(5) The final configuration looks like this:

[screenshot]

(6) Write a simple CountWord application:

import org.apache.spark.SparkContext
import org.apache.spark.SparkConf

object CountWord{
  def main(args: Array[String]) {
    System.setProperty("hadoop.home.dir", "c:\\winutil\\")

    val logFile = "C:\\spark-1.3.1-bin-hadoop2.4\\README.md" // Should be some file on your system
    val conf = new SparkConf().setAppName("Simple Application").setMaster("local")
    val sc = new SparkContext(conf)
    val logData = sc.textFile(logFile, 2).cache()
    val numAs = logData.filter(line => line.contains("a")).count()
    val numBs = logData.filter(line => line.contains("b")).count()
    println("Lines with a: %s, Lines with b: %s".format(numAs, numBs))
  }
}

Please notice “System.setProperty("hadoop.home.dir", "c:\\winutil\\")”. You should download winutils.exe and put it in the folder C:\winutil\bin. For detailed information, refer to the following posts:
a) Apache Spark checkpoint issue on windows;
b) Run Spark Unit Test On Windows 7.

(7) The final execution looks like this:

[screenshot]

 

The following part introduces creating an SBT project:

(1) Select “New project” -> “Scala” -> “SBT“, then click “Next“:

[screenshot]

(2) Fill in the “project name” and “project location“, then click “Finish“:

[screenshot]

(3) In Windows, modify the Scala version to 2.10.4 in build.sbt:

[screenshot]
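
The resulting build.sbt looks roughly like this (a sketch under the project name used here; the libraryDependencies line is only needed if you fetch Spark from Maven Central instead of attaching the spark-assembly jar by hand):

name := "CountWord"

version := "1.0"

scalaVersion := "2.10.4"

// Optional: fetch Spark from Maven Central; "provided" keeps it
// out of the application jar, since spark-submit supplies Spark itself.
libraryDependencies += "org.apache.spark" %% "spark-core" % "1.3.1" % "provided"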

(4) Add the Spark package and create a Scala object in the “src -> main -> scala-2.10” folder; the final file layout looks like this:

[screenshot]

(5) Run it!

You can also build a jar file:
“File” -> “Project Structure” -> “Artifacts“, then select options like this:

[screenshot]

Refer to this post on Stack Overflow.
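
Alternatively, since this is an SBT project, you can skip the Artifacts dialog and build the jar from the command line; sbt package writes it under target\scala-2.10 (the exact file name depends on the name and version in your build.sbt):

C:\Work\Intellij_scala\CountWord>sbt package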

Then execute the jar package with the spark-submit command:

C:\spark-1.3.1-bin-hadoop2.4\bin>spark-submit --class "CountWord" --master local[4] C:\Work\Intellij_scala\CountWord\out\artifacts\CountWord_jar\CountWord.jar
15/06/17 17:05:51 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[Stage 0:>                                                          (0 + 0) / 2]
[Stage 0:>                                                          (0 + 1) / 2]
[Stage 0:>                                                          (0 + 2) / 2]

Lines with a: 60, Lines with b: 29

Getting Started with Scala in IntelliJ IDEA 14.1

This tutorial uses IntelliJ IDEA version 14.1.3.

Prerequisites:

You should install Java and Scala first.
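
To verify both are on the PATH, you can run the standard version checks from a command prompt (the reported versions will vary with what you installed):

C:\>java -version
C:\>scala -version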

(1) Install Scala plugin:

a) After installing IntelliJ IDEA successfully, we need to install the Scala plugin first: in the welcome window, select “Configure” -> “Plugins“:

[screenshot]

b) Select “Install JetBrains Plugin...“:

[screenshot]

c) If your computer needs a proxy, please click “HTTP Proxy Settings” to configure it; otherwise ignore this step:

[screenshot]

d) Select the Scala plugin, and click “Install plugin” to install it:

[screenshot]

The installation progress looks like this:

[screenshot]

e) After installation, restart IntelliJ IDEA:

[screenshot]

(2) Create a Scala project:
a) Select “Create New Project“:

[screenshot]

b) Select “Scala” -> “Scala“, then click “Next“:

[screenshot]

c) Select a valid name for the project and a folder to store the project files:

[screenshot]

d) Fill “Project SDK” with the JDK directory:

[screenshot]

After selection, click “OK“:

[screenshot]

e) For Scala SDK, click “Create“. It will display the installed Scala; click “OK“:

[screenshot]

f) Click “Finish“:

[screenshot]

(3) Create a Scala application:

a) Select “src” -> “New” -> “Scala Class“:

[screenshot]

b) Select “object” as the “Kind” value:

[screenshot]

c) Write a simple “Hello World” program:

[screenshot]
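
The program itself appears only in the screenshot, so here is a minimal version of what it would contain (assuming the object is named HelloWorld, as step e) below suggests):

object HelloWorld {
  def main(args: Array[String]) {
    // Print the greeting checked in step f)
    println("Hello World!")
  }
}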

d) Select “Run” -> “Run“:

[screenshot]

e) Select HelloWorld:

[screenshot]

f) The application outputs “Hello World!“:

[screenshot]

All is OK now!