Spark - Linux SBT Environment Setup
wave35
2023. 3. 25. 21:06
[ SBT Installation ]
- Ubuntu
(The original dl.bintray.com repository was shut down in 2021; sbt packages are now served from repo.scala-sbt.org.)
echo "deb https://repo.scala-sbt.org/scalasbt/debian all main" | sudo tee /etc/apt/sources.list.d/sbt.list
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 2EE0EA64E40A89B84B2DF73499E82A75642AC823
sudo apt-get update
sudo apt-get install sbt
- CentOS
(The Bintray RPM repository no longer exists; the current official repo file is hosted at scala-sbt.org.)
curl -L https://www.scala-sbt.org/sbt-rpm.repo | sudo tee /etc/yum.repos.d/sbt-rpm.repo
sudo yum install sbt
Reference: https://twitter.github.io/scala_school/ko/sbt.html
[ Scala Installation ]
$ cd ~
$ wget http://downloads.lightbend.com/scala/2.11.12/scala-2.11.12.rpm
$ sudo yum install scala-2.11.12.rpm
$ scala -version
Create a project directory and launch sbt from inside it:
[home/user/projects]$ sbt
Create a project with sbt:
$ sbt new scala/hello-world.g8
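The template generates a project skeleton with a build.sbt at its root. The contents look roughly like the following; the exact versions and organization depend on the template revision, so treat every value here as illustrative:

```scala
// build.sbt generated by scala/hello-world.g8 (values are illustrative)
scalaVersion := "2.13.12"   // the template may pin a different version

name := "hello-world"
organization := "example.com"   // placeholder; the template sets its own
version := "1.0"
```

Any change to build.sbt is picked up the next time you start sbt (or after `reload` in a running sbt shell).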
[ Running the Example ]
$ cd project_name
$ sbt
At the sbt shell prompt, type run (note: `console` would instead open a Scala REPL):
sbt> run
Using SparkConf with Spark locally and in cluster mode:
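The post breaks off here, so as a sketch of the idea this heading announces: a minimal Scala example of building a SparkConf (the object and app names are hypothetical). The usual convention is to hard-code `setMaster("local[*]")` only for local testing, and in cluster mode to omit it and pass `--master` to spark-submit instead:

```scala
import org.apache.spark.{SparkConf, SparkContext}

object WordCountApp {
  def main(args: Array[String]): Unit = {
    // Local mode: hard-code the master for quick testing.
    // For cluster mode, delete the setMaster() line and supply the master
    // at submit time, e.g. spark-submit --master yarn ... instead.
    val conf = new SparkConf()
      .setAppName("WordCountApp")
      .setMaster("local[*]")   // remove for cluster deployment

    val sc = new SparkContext(conf)
    val counts = sc.parallelize(Seq("spark", "sbt", "spark"))
      .map(word => (word, 1))
      .reduceByKey(_ + _)
      .collect()
    counts.foreach(println)
    sc.stop()
  }
}
```

Settings passed on the spark-submit command line override values set in code only for properties not already set on the SparkConf, which is why leaving the master out of the code keeps the same jar usable in both modes.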