Installation

Download

Project site: http://azkaban.github.io/
Downloads: http://azkaban.github.io/downloads.html
Documentation: http://azkaban.github.io/azkaban/docs/latest/
This article deploys the web server and executor separately and uses MySQL for storage (see the docs for other modes), so download the following files:

  • azkaban-web-server-2.5.0.tar.gz
  • azkaban-executor-server-2.5.0.tar.gz
  • azkaban-sql-script-2.5.0.tar.gz

2.5.0 is the version used here; adjust for your version. Extract the archives after downloading.

MySQL

The azkaban-sql-script archive contains many SQL files. Create a database (here named azkaban) and a user, then run create-all-sql-2.5.0.sql:

mysql> CREATE DATABASE azkaban;
mysql> CREATE USER 'username'@'%' IDENTIFIED BY 'password';
mysql> GRANT SELECT,INSERT,UPDATE,DELETE ON azkaban.* TO 'username'@'%' WITH GRANT OPTION;
mysql> source /PATH/create-all-sql-2.5.0.sql;

Configuring the web server

Edit conf/azkaban.properties; the key properties are:

# Project name shown in the UI
azkaban.name=BigData
# Project subtitle
azkaban.label=besttone
# Must be set to Shanghai, otherwise jobs run on US time
default.timezone.id=Asia/Shanghai

database.type=mysql
mysql.port=3306
mysql.host=localhost
# Change to your database name
mysql.database=azkaban
# Change to your database user
mysql.user=azkaban
# Change to your database password
mysql.password=azkaban

mail.sender=email account
mail.host=email server
mail.user=email account
mail.password=email password

# Azkaban Jetty settings. Fill these in first; generating the keystore is covered below.
jetty.maxThreads=25
jetty.ssl.port=8443
jetty.port=8081
jetty.keystore=keystore
jetty.password=keystore password
jetty.keypassword=keystore password
jetty.truststore=keystore
jetty.trustpassword=keystore password

Generating the keystore

# Run the following command; when prompted, enter the keystore password from the previous step.
# The organization, country (CN), province, and city values can be anything.
/PATH/TO/JAVA_HOME/bin/keytool -genkey -keystore keystore -alias jetty -keyalg RSA

This produces a keystore file in the current directory; copy it into the azkaban-web-2.5.0 directory. If you want to keep it elsewhere, update the jetty.keystore and jetty.truststore paths in the configuration accordingly.

Starting the web server

# Start in the background
nohup ./bin/azkaban-web-start.sh &

Once started, open http://localhost:8443/ and log in with azkaban/azkaban (users and passwords are configured in conf/azkaban-users.xml).
To stop, run shutdown.sh in the bin directory.

Configuring the executor

Enter the executor directory and edit azkaban.properties:

default.timezone.id=Asia/Shanghai
database.type=mysql
mysql.port=3306
mysql.host=localhost
mysql.database=azkaban
mysql.user=azkaban
mysql.password=azkaban

Starting the executor

# Start in the background
nohup ./bin/azkaban-executor-start.sh &

To stop, run shutdown.sh in the bin directory.

Done

Both servers are now running and Azkaban is ready to use.

Usage

The following shows how to schedule Spark jobs with Azkaban.

Writing a job

Create a file whose name ends in .job.
The file contains type, command, dependencies (optional), and so on.
For example:

type=command
# Running a shell script is recommended: later you only maintain the script, while Azkaban defines the workflow
command=/PATH/TO/SPARK/bin/spark-submit --class x.x.x.XX --master spark://x.x.x.x:8070 xxx.jar
dependencies=names of other job files (only when there are upstream tasks)

Note: replace the x placeholders in command with your own values. You can also put the command in a script and set command=sh xxxxx.sh, then maintain that script going forward.

For multiple tasks, write multiple job files. Define the upstream relationships in dependencies; they must not form a cycle.
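As a minimal sketch of such a flow (file, script, and class names here are made up for illustration), a two-step dependency chain could look like this, where dependencies refers to the upstream job's name:

```properties
# prepare.job
type=command
command=sh prepare_data.sh

# train.job -- runs only after prepare.job succeeds
type=command
command=/PATH/TO/SPARK/bin/spark-submit --class demo.Train --master spark://x.x.x.x:8070 demo.jar
dependencies=prepare
```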

Packaging and uploading

Put the job files and the jar built from your Spark program in the same directory, then compress them together into one zip file.
Create a project in the Azkaban web UI, then upload the zip file.

Execution

Open the project page, click execute flow, and choose either immediate or scheduled execution, depending on your needs.

Writing to HBase

Method 1

val conf = new SparkConf().setAppName(this.getClass.getSimpleName)
val sc = new SparkContext(conf)

val data = ... // the RDD to write to HBase
val hConf = HBaseConfiguration.create()
hConf.set("hbase.zookeeper.quorum", "10.10.40.112")
hConf.set("hbase.zookeeper.property.clientPort", "2181")
//hConf.set("hbase.rootdir", "hdfs://10.10.40.111:9000/hbase113")
//hConf.setBoolean("hbase.cluster.distributed", true)
//hConf.setInt("hbase.client.scanner.caching", 2000)
//hConf.set("zookeeper.znode.parent", "/hbase")

hConf.set("hbase.defaults.for.version.skip", "true")
hConf.set(TableOutputFormat.OUTPUT_TABLE, "user_m_info") // target table
val job = new Job(hConf)
job.setOutputKeyClass(classOf[ImmutableBytesWritable])
job.setOutputValueClass(classOf[Result])
job.setOutputFormatClass(classOf[TableOutputFormat[ImmutableBytesWritable]])

data.map {
  case (mid, tag, value) =>
    val put = new Put(Bytes.toBytes(mid))
    put.add("m".getBytes, tag.getBytes, Bytes.toBytes(value)) // column family "m"
    (new ImmutableBytesWritable(), put)
}.saveAsNewAPIHadoopDataset(job.getConfiguration)

Method 2

val conf = new SparkConf().setAppName(this.getClass.getSimpleName)
val sc = new SparkContext(conf)

val data = ... // the RDD to write to HBase
data.foreachPartition { x =>
  val hConf = HBaseConfiguration.create()
  hConf.set("hbase.zookeeper.quorum", "192.168.0.180")
  hConf.set("hbase.zookeeper.property.clientPort", "2181")
  hConf.set("hbase.defaults.for.version.skip", "true")
  val table = new HTable(hConf, TableName.valueOf("user_m_info")) // table name
  table.setAutoFlush(false, false)
  table.setWriteBufferSize(3 * 1024 * 1024)
  x.foreach { y =>
    val put = new Put(Bytes.toBytes(y._1._1))
    put.addColumn("m".getBytes, (y._1._2.toString).getBytes, Bytes.toBytes(y._2.toString))
    table.put(put)
  }
  table.flushCommits()
}

Reading from HBase

val conf = new SparkConf().setAppName(this.getClass.getSimpleName)
val sc = new SparkContext(conf)

val hConf = HBaseConfiguration.create()
hConf.set("hbase.zookeeper.quorum","10.10.40.112")
hConf.set("hbase.zookeeper.property.clientPort","2181")
//hConf.set("hbase.rootdir","hdfs://10.10.40.111:9000/hbase113")
//hConf.setBoolean("hbase.cluster.distributed", true)
//hConf.setInt("hbase.client.scanner.caching", 2000)
//hConf.set("zookeeper.znode.parent","/hbase")
hConf.set("hbase.defaults.for.version.skip","true")
hConf.set(TableInputFormat.INPUT_TABLE,"user_m_info")

val data = sc.newAPIHadoopRDD(hConf,classOf[TableInputFormat],classOf[ImmutableBytesWritable],classOf[Result])

data.foreach(println)

Installation

Download

Pick the version you need from the official site (this article uses 3.2.3):

wget http://download.redis.io/releases/redis-3.2.3.tar.gz

Extract and build

tar xvzf redis-3.2.3.tar.gz
cd redis-3.2.3
make

After make completes, the redis-server, redis-cli, and other binaries can be found in the src directory.

Start and test

Run redis-server and you will see the Redis startup banner.
In another terminal, run redis-cli to connect and test. Use -h to specify the IP and -p to specify the port.

Setting up a service

To run Redis as a service, follow these steps.

  • First copy redis_init_script from the utils subdirectory of the Redis source tree to /etc/init.d, renaming it to redis.

    cp redis-3.2.3/utils/redis_init_script  /etc/rc.d/init.d/redis
  • Edit the script

    #!/bin/sh
    #
    # chkconfig: 2345 80 90
    # Simple Redis init.d script conceived to work on Linux systems
    # as it does use of the /proc filesystem.

    REDISPORT=6379
    EXEC=/usr/local/bin/redis-server
    CLIEXEC=/usr/local/bin/redis-cli

    PIDFILE=/var/run/redis_${REDISPORT}.pid
    CONF="/etc/redis/${REDISPORT}.conf"

    case "$1" in
    start)
    if [ -f $PIDFILE ]
    then
    echo "$PIDFILE exists, process is already running or crashed"
    else
    echo "Starting Redis server..."
    $EXEC $CONF &
    fi
    ;;
    stop)
    if [ ! -f $PIDFILE ]
    then
    echo "$PIDFILE does not exist, process is not running"
    else
    PID=$(cat $PIDFILE)
    echo "Stopping ..."
    $CLIEXEC -p $REDISPORT shutdown
    while [ -x /proc/${PID} ]
    do
    echo "Waiting for Redis to shutdown ..."
    sleep 1
    done
    echo "Redis stopped"
    fi
    ;;
    *)
    echo "Please use start or stop as first argument"
    ;;
    esac

There are two modifications: 1) line 3 (the chkconfig comment) is new; 2) an & was appended at the end of line 21 so the server starts in the background.

  • Copy redis-server and redis-cli from the src subdirectory to /usr/local/bin/ (the paths referenced on lines 8 and 9 of the script)

    cp redis-3.2.3/src/redis-server  /usr/local/bin/
    cp redis-3.2.3/src/redis-cli /usr/local/bin/
  • Create a redis directory under /etc and copy the configuration file into it (line 12 of the script)

    mkdir /etc/redis
    cp redis-3.2.3/redis.conf /etc/redis/6379.conf
  • Register the service

    chkconfig --add redis
  • Start the service

    service redis start

Redis now starts as a service and is ready to use. To stop it, see the next step.

  • Stop the service
    service redis stop

To serve external clients, edit bind in the configuration file and set it to the machine's externally reachable IP.

Integration

For now this only covers writing data from Spark to Redis.
Importing the jar

<dependency>
    <groupId>redis.clients</groupId>
    <artifactId>jedis</artifactId>
    <version>2.9.0</version>
</dependency>

The code is as follows:

import org.apache.spark.{SparkContext, SparkConf}
import redis.clients.jedis.Jedis
val conf = new SparkConf().setAppName(this.getClass.getSimpleName)
val sc = new SparkContext(conf)

val rdd = ... // the data to write to Redis, RDD[Map[String,String]]
rdd.foreachPartition { iter =>
  val redis = new Jedis("10.10.40.111", 6379, 400000)
  val ppl = redis.pipelined() // use a pipeline for more efficient batching
  iter.foreach { f =>
    ppl.hmset(f("mid"), f) // Map values, keyed by mid
  }
  ppl.sync()
}

Installation

Download

wget https://download.elastic.co/elasticsearch/release/org/elasticsearch/distribution/tar/elasticsearch/2.3.5/elasticsearch-2.3.5.tar.gz

This article uses the latest version at the time of writing (2.3.5); check the official site for other versions.

Install

  • Extract the archive

    tar -zxvf elasticsearch-2.3.5.tar.gz
  • Start it

    # run as a non-root user
    ./bin/elasticsearch
    # add -d to run in the background
  • Test

    curl -X GET http://localhost:9200/

Configuration

Edit the configuration file (located at config/elasticsearch.yml):

vi config/elasticsearch.yml

The main parameters to adjust are:

Parameter | Description | Example / default
*cluster.name | cluster name; nodes sharing the same name form one cluster | es_test (default: elasticsearch)
*node.name | name of this node | spark1 (different on each machine)
*network.host | IP the node serves externally | 192.168.0.180 (the machine's external IP)
*http.port | external service port | 9200 (the default)
node.master | whether the node is eligible to be elected master | true
node.data | whether the node stores index data | true
index.number_of_shards | number of index shards | 5 (default)
index.number_of_replicas | number of index replicas | 1 (default)
path.conf | path for configuration files | defaults to the config folder
path.data | path for index data | defaults to the data folder under ES; multiple comma-separated paths allowed
path.work | path for temporary files | defaults to the work folder under ES
path.logs | path for log files | defaults to the logs folder under ES
path.plugins | path for plugins | defaults to the plugins folder under ES
bootstrap.mlockall | set to true to lock memory | true
http.max_content_length | maximum request content size | default 100MB
http.enabled | whether to serve HTTP externally | true
gateway.type | gateway type | default local (local filesystem); may be a local FS, distributed FS, HDFS, or Amazon S3
gateway.recover_after_nodes | start data recovery after N nodes are up | default 1
gateway.recover_after_time | timeout before initializing the recovery process | 5m (default 5 minutes)
gateway.expected_nodes | expected number of nodes in the cluster | default 2
cluster.routing.allocation.node_initial_primaries_recoveries | concurrent recovery threads during initial recovery | default 4
cluster.routing.allocation.node_concurrent_recoveries | concurrent recovery threads when adding/removing nodes or rebalancing | default 4
indices.recovery.max_size_per_sec | bandwidth limit during recovery, e.g. 100mb | default 0 (unlimited)
indices.recovery.concurrent_streams | max concurrent streams opened when recovering from other shards | default 5
discovery.zen.minimum_master_nodes | number of master-eligible nodes a node must see | default 1
discovery.zen.ping.timeout | ping timeout for node auto-discovery | 3s (default)
discovery.zen.ping.multicast.enabled | whether multicast discovery is enabled | default true
discovery.zen.ping.unicast.hosts | initial list of master nodes used to discover newly joined nodes | ["host1", "host2:port", "host3[portX-portY]"]

Entries marked * are required.

Cluster

Copy the installation to the other machines, change node.name on each, and start.

Installing plugins

Plugins are installed with the bin/plugin command.
First look at the available subcommands:

# show help
./bin/plugin -h
# output:
NAME
plugin - Manages plugins
SYNOPSIS
plugin <command>
DESCRIPTION
Manage plugins
COMMANDS
install Install a plugin
remove Remove a plugin
list List installed plugins
NOTES
[*] For usage help on specific commands please type "plugin <command> -h"

The install command is then:

# install kopf; the argument is the GitHub repo path
./bin/plugin install lmenezes/elasticsearch-kopf

Integration

See the introduction on the official site.
The Spark integration is documented there as well.

Importing the jar

<dependency>
    <groupId>org.elasticsearch</groupId>
    <artifactId>elasticsearch-spark_2.10</artifactId>
    <version>2.3.2</version>
</dependency>

Configuring the ES parameters

import org.apache.spark.{SparkConf, SparkContext}
val conf = new SparkConf().setAppName(this.getClass.getSimpleName)

// create the index automatically
conf.set("es.index.auto.create", "true")
// conf.set("es.nodes.wan.only", "true")
// external service address
conf.set("es.nodes", "10.10.40.111")
// external service port, default 9200
conf.set("es.port", "9222");
// optional; this article uses mid as the _id, hence this setting
// other options: https://www.elastic.co/guide/en/elasticsearch/hadoop/current/configuration.html
conf.set("es.mapping.id", "mid")

Writing to ES

import org.elasticsearch.spark._

val sc = new SparkContext(conf)

val numbers = Map("one" -> 1, "two" -> 2, "three" -> 3)
val airports = Map("arrival" -> "Otopeni", "SFO" -> "San Fran")
val rdd = sc.makeRDD(Seq(numbers, airports))

// method 1
// the argument to saveToEs is index/type
rdd.saveToEs("spark/docs")

// method 2
import org.elasticsearch.spark.rdd.EsSpark
EsSpark.saveToEs(rdd, "spark/docs")

Reading from ES

import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._

import org.elasticsearch.spark._

...

val conf = ...
val sc = new SparkContext(conf)

val RDD = sc.esRDD("radio/artists")
// with a query string
val rdd2 = sc.esRDD("radio/artists", "?q=me*")

See the official site for other usage patterns.
More documentation is available there as well.

Preparation

Download ZooKeeper from the official site [download link].

Configuration

Extract the downloaded archive:

tar -zxvf zookeeper-3.4.6.tar.gz

Set the environment variables:

# move it wherever you like; here /usr/local
mv zookeeper-3.4.6 /usr/local/
# add the ZooKeeper environment variables
vi /etc/profile

# append the following
export ZOOKEEPER_HOME=/usr/local/zookeeper-3.4.6
export PATH=$PATH:$ZOOKEEPER_HOME/bin

# add the zk lib to the classpath
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$ZOOKEEPER_HOME/lib

Editing the zk configuration

# enter the zk config directory
cd /usr/local/zookeeper-3.4.6/conf/
# copy the sample config
cp zoo_sample.cfg zoo.cfg
# edit it
vi zoo.cfg
# The number of milliseconds of each tick
# the heartbeat interval between ZooKeeper servers, and between clients and
# servers; a heartbeat is sent every tickTime milliseconds.
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
# the directory where ZooKeeper stores its data; by default the write-ahead
# logs are kept here as well.
dataDir=/home/zookeeper/data
# the port at which the clients will connect
# ZooKeeper listens on this port and accepts client connection requests.
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
#server.1=192.168.0.181:2888:3888
#server.2=192.168.0.182:2888:3888
#server.3=192.168.0.183:2888:3888

initLimit: how many heartbeat intervals (tickTime) a follower may take to complete its initial connection and sync ("client" here means a Follower in the ZooKeeper quorum connecting to the Leader, not a user client). If the Leader has not heard back after that many heartbeats, the connection attempt is considered failed. With initLimit=10 and tickTime=2000, the total is 10*2000 ms = 20 seconds.

syncLimit: the maximum length, in tickTime intervals, of a request/response exchange between the Leader and a Follower. With syncLimit=5 and tickTime=2000, the total is 5*2000 ms = 10 seconds.

server.A=B:C:D: A is a number identifying which server this is; B is the server's IP address; C is the port this server uses to exchange information with the cluster's Leader; D is the port used to hold a new leader election if the current Leader fails. In a pseudo-cluster configuration B is the same for every entry, so the different ZooKeeper instances must be given distinct port numbers.
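As a sanity check on the arithmetic above, the effective timeouts follow directly from the zoo.cfg values:

```python
# ZooKeeper timeouts derived from the zoo.cfg values used in this article
tick_time_ms = 2000   # heartbeat interval
init_limit = 10       # ticks allowed for a follower's initial sync
sync_limit = 5        # ticks allowed per leader/follower exchange

init_timeout_s = tick_time_ms * init_limit / 1000
sync_timeout_s = tick_time_ms * sync_limit / 1000
print(init_timeout_s, sync_timeout_s)  # 20.0 10.0
```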

Starting zk

First check whether the port is in use:

# check whether 2181 is already in use
netstat -nat

Check whether zk is already running:

# normally only jps itself is listed; with Hadoop running there may be more. Look for QuorumPeerMain; if absent, zk is not running.
jps

Enter the bin directory and start it:

./zkServer.sh start

To check whether it started, see the previous two steps.
To stop it, run:

# in the bin directory
./zkServer.sh stop

Cluster

Cluster configuration

zoo.cfg

Edit zoo.cfg and uncomment the three server entries at the bottom:

...
server.1=192.168.0.181:2888:3888
server.2=192.168.0.182:2888:3888
server.3=192.168.0.183:2888:3888

myid

Under the dataDir configured in zoo.cfg, create a myid file containing a single number that identifies the current host: use the X from the corresponding server.X entry in conf/zoo.cfg.

# on the 181 machine
echo "1" > /home/zookeeper/data/myid

Cluster startup

Run the start command on each of the three machines:

./zkServer.sh start

Notes on some matrix concepts, for future reference.

Overview

  • Square matrix
  • Diagonal matrix
  • Symmetric matrix
  • Transpose
  • Inverse matrix
  • Positive definite matrix

Square matrix

A square matrix has as many rows as columns; an n×n matrix is called a square matrix of order n.
For example:
$$\begin {Bmatrix}a&0&0&0\\0&b&0&0\\0&0&c&0\\0&0&0&d\end{Bmatrix}$$

is a square matrix of order 4.

Diagonal matrix

A diagonal matrix is a square matrix in which every entry off the main diagonal is 0; the diagonal entries themselves may be 0 or any other value.
It is commonly written $diag(a_1,a_2,a_3,\cdots,a_n)$.
For example:
$$\begin {Bmatrix}a&0&0&0\\0&b&0&0\\0&0&c&0\\0&0&0&d\end{Bmatrix}$$

where a, b, c, d can be any numbers.

  • A diagonal matrix whose diagonal entries are all 0 is the zero matrix
  • A diagonal matrix with a=b=c=d is called a scalar matrix
  • A diagonal matrix with a=b=c=d=1 is called the identity matrix
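The special cases above can be sketched in a few lines of pure Python (no libraries), representing matrices as lists of rows:

```python
def diag(values):
    """Build a diagonal matrix (list of rows) from the given diagonal values."""
    n = len(values)
    return [[values[i] if i == j else 0 for j in range(n)] for i in range(n)]

identity = diag([1, 1, 1])   # a = b = c = 1 -> identity matrix
scalar = diag([5, 5, 5])     # equal diagonal entries -> scalar matrix
zero = diag([0, 0, 0])       # all zeros -> zero matrix

print(identity)  # [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
```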

Symmetric matrix

A matrix whose entries are symmetric about the main diagonal.
For example:
$$\begin {Bmatrix}1&0&8&1\\0&2&5&7\\8&5&3&9\\1&7&9&4\end{Bmatrix}$$

$m_{ij}=m_{ji}$

Transpose

Swapping the rows of a matrix A with the corresponding columns yields a new matrix called the transpose of A, written $A^T$ or $A'$.
$$A=\begin {Bmatrix}1&0&8\\0&2&5\end{Bmatrix}$$
$$B=\begin {Bmatrix}1&0\\2&0\\8&5\end{Bmatrix}$$

B is the transpose of A.

  • (A±B)'=A'±B'
  • (A×B)'=B'×A'
  • (A')'=A
  • (λA')'=λA
  • det(A')=det(A), i.e. the determinant is unchanged by transposition

Inverse matrix

Given an n-th order square matrix A, if there exists an n-th order square matrix B such that $AB=BA=I_n$ (it suffices that either AB=In or BA=In holds), where $I_n$ is the n-th order identity matrix, then A is invertible and B is the inverse of A, written $A^{-1}$.
If the inverse of A exists, A is called a nonsingular matrix, an invertible matrix, or a full-rank matrix.

Given an n-th order square matrix A, the following statements are equivalent:
A is invertible.
The determinant of A is nonzero.
The rank of A equals n (A has full rank).
The transpose $A^T$ is also invertible.
$AA^T$ is also invertible.
There exists an n-th order square matrix B with AB = In.
There exists an n-th order square matrix B with BA = In.

If A is invertible, then $A^{-1}=\frac {A^*} {|A|}$, where $A^*$ is the adjugate matrix of A.

The entries of the adjugate $A^*$ are arranged so that its k-th column consists of the cofactors of the k-th row of A.
Cofactor definition: in an n-th order determinant A, deleting the i-th row and j-th column containing the entry $a_{ij}$ leaves an (n-1)-th order determinant called the minor of $a_{ij}$, written $M_{ij}$; then
$A_{ij}=(-1)^{i+j}M_{ij}$, and $A_{ij}$ is called the cofactor of $a_{ij}$.

A is invertible if and only if |A| ≠ 0 (the determinant of A is nonzero). (When |A| = 0, A is called a singular matrix.)
An invertible matrix is necessarily square.
If A is invertible, its inverse is unique.
The product of two invertible matrices is invertible.
The transpose of an invertible matrix is invertible.
A matrix is invertible if and only if it has full rank.
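For the 2×2 case, the adjugate formula $A^{-1} = A^*/|A|$ can be written out directly:

```python
def inverse_2x2(m):
    """Invert a 2x2 matrix via the adjugate: A^-1 = adj(A) / det(A)."""
    (a, b), (c, d) = m
    det = a * d - b * c
    if det == 0:
        raise ValueError("singular matrix: det(A) = 0, no inverse exists")
    # adjugate of [[a, b], [c, d]] is [[d, -b], [-c, a]]
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[2, 1], [1, 1]]
inv = inverse_2x2(A)
# check that A * A^-1 = I
prod = [[sum(A[i][k] * inv[k][j] for k in range(2)) for j in range(2)]
        for i in range(2)]
print(prod)  # [[1.0, 0.0], [0.0, 1.0]]
```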

Positive definite matrix

First, orthogonal matrices: a matrix A is orthogonal when A times its transpose equals the identity matrix E.

  1. A square matrix A is orthogonal iff its row (column) vectors form an orthonormal set (pairwise inner products are zero);
  2. A square matrix A is orthogonal iff its n row (column) vectors form an orthonormal basis of n-dimensional vector space;
  3. A is orthogonal iff its row vectors are pairwise orthogonal unit vectors;
  4. the column vectors of an orthogonal A likewise form an orthonormal set.
  5. An orthogonal matrix is the transition matrix between orthonormal bases of a Euclidean space.

Equivalent conditions for a symmetric matrix A to be positive definite:
1. the associated quadratic form is positive definite
2. all principal minors are greater than 0
3. all leading principal minors are greater than 0
4. all eigenvalues are greater than 0
A necessary condition for positive definiteness: every diagonal entry is greater than 0 (often used to show a matrix is not positive definite).
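Condition 3 (Sylvester's criterion) is the easiest to check by hand; a pure-Python sketch for a symmetric 2×2 matrix:

```python
def is_positive_definite_2x2(m):
    """Sylvester's criterion for a symmetric 2x2 matrix:
    both leading principal minors must be > 0."""
    (a, b), (c, d) = m
    minor1 = a                 # 1x1 leading principal minor
    minor2 = a * d - b * c     # 2x2 determinant
    return minor1 > 0 and minor2 > 0

print(is_positive_definite_2x2([[2, 1], [1, 2]]))  # True  (eigenvalues 1 and 3)
print(is_positive_definite_2x2([[1, 2], [2, 1]]))  # False (det = -3 < 0)
```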

This article follows directly from the previous post, Spring mvc + Hibernate + Jpa + druid 全注解配置实例, which you should read first.

Introduction

Zero-configuration has been supported since Servlet 3.0, so both the servlet API version and the container's servlet version must be 3.0+.
Unlike the previous article, there are no configuration files here: even web.xml is empty, and everything is done in Java code instead. pom.xml is the same as before; web.xml has no content; there is no Spring XML configuration at all.
This covers initialization, the data source, MVC, container setup, and so on.

Initialization

import com.alibaba.druid.support.http.StatViewServlet;
import com.alibaba.druid.support.http.WebStatFilter;
import org.springframework.web.WebApplicationInitializer;
import org.springframework.web.context.ContextLoaderListener;
import org.springframework.web.context.support.AnnotationConfigWebApplicationContext;
import org.springframework.web.filter.CharacterEncodingFilter;
import org.springframework.web.servlet.DispatcherServlet;

import javax.servlet.*;

/**
 * Created by Administrator on 2015/12/25.
 */

public class Initializer implements WebApplicationInitializer {
    public void onStartup(ServletContext servletContext) throws ServletException {
        AnnotationConfigWebApplicationContext ctx = new AnnotationConfigWebApplicationContext();
        ctx.register(WebAppConfig.class);
        servletContext.addListener(new ContextLoaderListener(ctx));

        ctx.setServletContext(servletContext);

        ServletRegistration.Dynamic servlet = servletContext.addServlet("dispatcher", new DispatcherServlet(ctx));
        servlet.addMapping("/");
        servlet.setLoadOnStartup(1);

        CharacterEncodingFilter encodingFilter = new CharacterEncodingFilter();
        encodingFilter.setEncoding("UTF-8");
        encodingFilter.setForceEncoding(true);
        FilterRegistration.Dynamic encodingServlet = servletContext.addFilter("encodingFilter", encodingFilter);
        encodingServlet.addMappingForUrlPatterns(null, true, "/*");

        WebStatFilter webStatFilter = new WebStatFilter();
        FilterRegistration.Dynamic webStatServlet = servletContext.addFilter("DruidWebStatFilter", webStatFilter);
        webStatServlet.setInitParameter("exclusions", "*.js,*.gif,*.jpg,*.png,*.css,*.ico,/druid/*");
        webStatServlet.addMappingForUrlPatterns(null, true, "/*");

        ServletRegistration.Dynamic druidStatViewServlet = servletContext.addServlet("DruidStatView", new StatViewServlet());
        druidStatViewServlet.addMapping("/druid/*");
    }
}

Compare this against a conventional web.xml.

Spring configuration

Two files follow; they could be split further and combined with @Import.

import com.alibaba.druid.pool.DruidDataSource;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.context.annotation.*;
import org.springframework.core.env.Environment;
import org.springframework.orm.jpa.JpaTransactionManager;
import org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean;
import org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter;
import org.springframework.scheduling.quartz.SchedulerFactoryBean;
import org.springframework.transaction.annotation.EnableTransactionManagement;
import org.springframework.validation.beanvalidation.LocalValidatorFactoryBean;
import org.springframework.web.multipart.commons.CommonsMultipartResolver;
import org.springframework.web.servlet.config.annotation.EnableWebMvc;
import org.springframework.web.servlet.view.JstlView;
import org.springframework.web.servlet.view.UrlBasedViewResolver;

import javax.annotation.Resource;
import javax.sql.DataSource;
import java.sql.SQLException;
import java.util.Properties;

/**
 * Created by Administrator on 2015/12/25.
 */

@Configuration
@ComponentScan("com.zzcm.tmp")
@EnableWebMvc
@EnableTransactionManagement(proxyTargetClass = true)
@PropertySource("classpath:db.properties")
@Import({MvcConfig.class})
public class WebAppConfig {

    private static final Logger LOG = LoggerFactory.getLogger(WebAppConfig.class);

    private static final String P_DB_DRIVER = "db.driver";
    private static final String P_DB_PASSWORD = "db.password";
    private static final String P_DB_URL = "db.url";
    private static final String P_DB_USERNAME = "db.username";

    private static final String P_DB_MAXACTIVE = "db.maxActive";
    private static final String P_DB_INITSIZE = "db.initialSize";
    private static final String P_DB_MAXWAIT = "db.maxWait";
    private static final String P_DB_MINIDLE = "db.minIdle";
    private static final String P_DB_TBERM = "db.timeBetweenEvictionRunsMillis";
    private static final String P_DB_MEITM = "db.minEvictableIdleTimeMillis";
    private static final String P_DB_VQ = "db.validationQuery";
    private static final String P_DB_TESTWHILEIDEL = "db.testWhileIdle";
    private static final String P_DB_TESTONBORROW = "db.testOnBorrow";
    private static final String P_DB_TESTONRETURN = "db.testOnReturn";
    private static final String P_DB_FILTERS = "db.filters";


    private static final String P_HIBERNATE_DIALECT = "hibernate.dialect";
    private static final String P_HIBERNATE_SHOW_SQL = "hibernate.show_sql";
    private static final String P_ENTITYMANAGER_PACKAGES_TO_SCAN = "entitymanager.packages.to.scan";
    private static final String P_HIBERNATE_MAX_FETCH_DEPTH = "hibernate.max_fetch_depth";
    private static final String P_HIBERNATE_JDBC_FETCH_SIZE = "hibernate.jdbc.fetch_size";
    private static final String P_HIBERNATE_JDBC_BATCH_SIZE = "hibernate.jdbc.batch_size";
    private static final String P_HIBERNATE_FORMAT_SQL = "hibernate.format_sql";
    private static final String P_HIBERNATE_CACHE_P_CLASS = "hibernate.cache.provider_class";
    private static final String P_PERSISTENCE_VALI_MODE = "javax.persistence.validation.mode";
    private static final String P_HIBERNATE_EJB_NAMING_STRATEGY = "hibernate.ejb.naming_strategy";

    @Resource
    private Environment env;


    @Bean
    public UrlBasedViewResolver setupViewResolver() {
        UrlBasedViewResolver resolver = new UrlBasedViewResolver();
        resolver.setPrefix("/WEB-INF/jsp/");
        resolver.setSuffix(".jsp");
        resolver.setViewClass(JstlView.class);
        return resolver;
    }

    @Bean
    public DataSource dataSource() {
        DruidDataSource dataSource = new DruidDataSource();
        dataSource.setDriverClassName(env.getRequiredProperty(P_DB_DRIVER));
        dataSource.setUrl(env.getRequiredProperty(P_DB_URL));
        dataSource.setUsername(env.getRequiredProperty(P_DB_USERNAME));
        dataSource.setPassword(env.getRequiredProperty(P_DB_PASSWORD));

        dataSource.setMaxActive(env.getRequiredProperty(P_DB_MAXACTIVE, Integer.class));
        dataSource.setInitialSize(env.getRequiredProperty(P_DB_INITSIZE, Integer.class));
        dataSource.setMaxWait(env.getRequiredProperty(P_DB_MAXWAIT, Integer.class));
        dataSource.setMinIdle(env.getRequiredProperty(P_DB_MINIDLE, Integer.class));
        dataSource.setTimeBetweenEvictionRunsMillis(env.getRequiredProperty(P_DB_TBERM, Integer.class));
        dataSource.setMinEvictableIdleTimeMillis(env.getRequiredProperty(P_DB_MEITM, Long.class));
        dataSource.setValidationQuery(env.getRequiredProperty(P_DB_VQ));
        dataSource.setTestWhileIdle(env.getRequiredProperty(P_DB_TESTWHILEIDEL, Boolean.class));
        dataSource.setTestOnBorrow(env.getRequiredProperty(P_DB_TESTONBORROW, Boolean.class));
        dataSource.setTestOnReturn(env.getRequiredProperty(P_DB_TESTONRETURN, Boolean.class));
        try {
            dataSource.setFilters(env.getRequiredProperty(P_DB_FILTERS));
        } catch (SQLException e) {
            LOG.error("Create DataSource filters failed.", e);
        }
        return dataSource;
    }

    @Bean
    public LocalContainerEntityManagerFactoryBean entityManagerFactory() {
        LocalContainerEntityManagerFactoryBean bean = new LocalContainerEntityManagerFactoryBean();
        bean.setDataSource(dataSource());
        bean.setPackagesToScan(env.getRequiredProperty(P_ENTITYMANAGER_PACKAGES_TO_SCAN));
        //b.setPersistenceUnitName("mysqldb");
        HibernateJpaVendorAdapter adapter = new HibernateJpaVendorAdapter();
        adapter.setShowSql(env.getRequiredProperty(P_HIBERNATE_SHOW_SQL, Boolean.class));
        adapter.setDatabasePlatform(env.getRequiredProperty(P_HIBERNATE_DIALECT));
        bean.setJpaVendorAdapter(adapter);
        bean.setJpaProperties(jpaProperties());
        return bean;
    }

    private Properties jpaProperties() {
        Properties prop = new Properties();
        prop.put(P_HIBERNATE_MAX_FETCH_DEPTH, env.getRequiredProperty(P_HIBERNATE_MAX_FETCH_DEPTH, Integer.class));
        prop.put(P_HIBERNATE_JDBC_FETCH_SIZE, env.getRequiredProperty(P_HIBERNATE_JDBC_FETCH_SIZE, Integer.class));
        prop.put(P_HIBERNATE_JDBC_BATCH_SIZE, env.getRequiredProperty(P_HIBERNATE_JDBC_BATCH_SIZE, Integer.class));
        prop.put(P_HIBERNATE_SHOW_SQL, env.getRequiredProperty(P_HIBERNATE_SHOW_SQL, Boolean.class));
        prop.put(P_HIBERNATE_FORMAT_SQL, env.getRequiredProperty(P_HIBERNATE_FORMAT_SQL, Boolean.class));
        prop.put(P_HIBERNATE_CACHE_P_CLASS, env.getRequiredProperty(P_HIBERNATE_CACHE_P_CLASS));
        prop.put(P_PERSISTENCE_VALI_MODE, env.getRequiredProperty(P_PERSISTENCE_VALI_MODE));
        prop.put(P_HIBERNATE_EJB_NAMING_STRATEGY, env.getRequiredProperty(P_HIBERNATE_EJB_NAMING_STRATEGY));

        return prop;
    }

    @Bean
    public JpaTransactionManager transactionManager() {
        JpaTransactionManager transactionManager = new JpaTransactionManager();
        transactionManager.setEntityManagerFactory(entityManagerFactory().getObject());
        return transactionManager;
    }

    @Bean
    public SchedulerFactoryBean schedulerFactoryBean() {
        return new SchedulerFactoryBean();
    }

    @Bean
    public CommonsMultipartResolver multipartResolver() {
        CommonsMultipartResolver resolver = new CommonsMultipartResolver();
        resolver.setMaxUploadSize(104857600);
        return resolver;
    }

    @Bean
    public LocalValidatorFactoryBean validator() {
        return new LocalValidatorFactoryBean();
    }
}

import com.alibaba.fastjson.serializer.SerializerFeature;
import com.alibaba.fastjson.support.spring.FastJsonHttpMessageConverter;
import org.springframework.context.annotation.Configuration;
import org.springframework.http.MediaType;
import org.springframework.http.converter.HttpMessageConverter;
import org.springframework.http.converter.StringHttpMessageConverter;
import org.springframework.web.servlet.config.annotation.ContentNegotiationConfigurer;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurerAdapter;

import java.nio.charset.Charset;
import java.util.ArrayList;
import java.util.List;

/**
 * Created by Administrator on 2015/12/25.
 */

@Configuration
public class MvcConfig extends WebMvcConfigurerAdapter {

    @Override
    public void configureContentNegotiation(ContentNegotiationConfigurer configurer) {
        configurer.mediaType("json", MediaType.valueOf("application/json"));
        configurer.mediaType("xml", MediaType.valueOf("application/xml"));
        configurer.mediaType("html", MediaType.valueOf("text/html"));
        configurer.mediaType("*", MediaType.valueOf("*/*"));
    }

    @Override
    public void configureMessageConverters(List<HttpMessageConverter<?>> converters) {
        StringHttpMessageConverter stringConverter = new StringHttpMessageConverter(Charset.forName("UTF-8"));
        List<MediaType> list = new ArrayList<MediaType>();
        list.add(new MediaType("text", "plain", Charset.forName("UTF-8")));
        list.add(new MediaType("*", "*", Charset.forName("UTF-8")));
        stringConverter.setSupportedMediaTypes(list);

        FastJsonHttpMessageConverter jsonConverter = new FastJsonHttpMessageConverter();
        List<MediaType> jsonList = new ArrayList<MediaType>();
        jsonList.add(MediaType.valueOf("application/json;charset=UTF-8"));
        jsonList.add(MediaType.valueOf("text/plain;charset=utf-8"));
        jsonList.add(MediaType.valueOf("text/html;charset=utf-8"));
        jsonConverter.setSupportedMediaTypes(jsonList);
        jsonConverter.setFeatures(new SerializerFeature[]{SerializerFeature.WriteDateUseDateFormat});

        converters.add(stringConverter);
        converters.add(jsonConverter);
    }
}

Together these two classes replace spring-mvc.xml, spring-root.xml, and the related configuration; MvcConfig also adds the JSON support.
db.properties is referenced here; its contents match the previous article.
The rest of the implementation is the same as in the previous article.

The source code can be downloaded at springmvc_noxml.

A project template, kept here for future reference.

Introduction

This sets up a fully annotation-driven Spring MVC environment integrating JPA/Hibernate, Druid, and fastjson, with logback for logging.
The main configuration files are:

//Maven dependencies
pom.xml
web.xml
//MVC configuration
spring-mvc.xml
//Spring container configuration
spring-root.xml
//logging configuration
logback.xml
//database and other settings
*.properties

Configuration

pom.xml

<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<jdk-version>1.6</jdk-version>
<junit-version>4.11</junit-version>
<spring-version>3.2.4.RELEASE</spring-version>
</properties>

<dependencies>
<!-- spring begin-->
<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-core</artifactId>
<version>${spring-version}</version>
</dependency>
<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-beans</artifactId>
<version>${spring-version}</version>
</dependency>
<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-web</artifactId>
<version>${spring-version}</version>
</dependency>
<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-webmvc</artifactId>
<version>${spring-version}</version>
</dependency>
<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-context</artifactId>
<version>${spring-version}</version>
</dependency>
<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-test</artifactId>
<version>${spring-version}</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-context-support</artifactId>
<version>${spring-version}</version>
</dependency>
<dependency>
<groupId>org.springframework.data</groupId>
<artifactId>spring-data-jpa</artifactId>
<version>1.3.2.RELEASE</version>
</dependency>
<!-- spring end -->


<!-- servlet begin -->
<dependency>
<groupId>javax.servlet</groupId>
<artifactId>servlet-api</artifactId>
<version>3.0-alpha-1</version>
<scope>provided</scope>
</dependency>
<!-- servlet end -->


<!-- junit begin -->
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<version>${junit-version}</version>
<scope>test</scope>
</dependency>
<!-- junit end -->


<!-- hibernate begin -->
<dependency>
<groupId>org.hibernate</groupId>
<artifactId>hibernate-core</artifactId>
<version>4.2.1.Final</version>
</dependency>
<dependency>
<groupId>org.hibernate</groupId>
<artifactId>hibernate-entitymanager</artifactId>
<version>4.2.1.Final</version>
</dependency>
<dependency>
<groupId>org.hibernate.common</groupId>
<artifactId>hibernate-commons-annotations</artifactId>
<version>4.0.1.Final</version>
</dependency>
<dependency>
<groupId>org.hibernate</groupId>
<artifactId>hibernate-validator</artifactId>
<version>5.0.0.Final</version>
</dependency>
<!-- hibernate end -->


<!--logger begin -->
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-api</artifactId>
<version>1.7.5</version>
</dependency>

<dependency>
<groupId>ch.qos.logback</groupId>
<artifactId>logback-core</artifactId>
<version>1.1.3</version>
</dependency>
<dependency>
<groupId>ch.qos.logback</groupId>
<artifactId>logback-classic</artifactId>
<version>1.1.3</version>
</dependency>

<!-- logger end -->


<!-- database begin -->
<dependency>
<groupId>mysql</groupId>
<artifactId>mysql-connector-java</artifactId>
<version>5.1.25</version>
</dependency>
<!-- database end -->


<!-- apache commons begin -->
<!-- file upload -->
<dependency>
<groupId>commons-fileupload</groupId>
<artifactId>commons-fileupload</artifactId>
<version>1.3.1</version>
</dependency>

<!-- apache commons end -->

<dependency>
<groupId>com.alibaba</groupId>
<artifactId>druid</artifactId>
<version>1.0.16</version>
</dependency>

<dependency>
<groupId>org.quartz-scheduler</groupId>
<artifactId>quartz</artifactId>
<version>2.2.2</version>
</dependency>

<dependency>
<groupId>com.jcraft</groupId>
<artifactId>jsch</artifactId>
<version>0.1.53</version>
</dependency>

<dependency>
<groupId>org.apache.ant</groupId>
<artifactId>ant</artifactId>
<version>1.9.6</version>
</dependency>

<dependency>
<groupId>jstl</groupId>
<artifactId>jstl</artifactId>
<version>1.2</version>
</dependency>

<dependency>
<groupId>com.alibaba</groupId>
<artifactId>fastjson</artifactId>
<version>1.2.7</version>
</dependency>
</dependencies>

web.xml

<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://java.sun.com/xml/ns/javaee" xmlns:web="http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd" xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd" metadata-complete="true" version="3.0">
<display-name>Archetype Created Web Application</display-name>
<context-param>
<param-name>log4jConfigLocation</param-name>
<param-value>classpath*:/logback.xml</param-value>
</context-param>
<context-param>
<param-name>contextConfigLocation</param-name>
<param-value>classpath*:/spring-root.xml</param-value>
</context-param>

<!-- Default Spring profile for the context -->
<!--<context-param>-->
<!--<param-name>spring.profiles.default</param-name>-->
<!--<param-value>development</param-value>-->
<!--</context-param>-->

<listener>
<listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
</listener>

<!-- Spring MVC dispatcher -->
<servlet>
<servlet-name>DispatcherServlet</servlet-name>
<servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
<init-param>
<param-name>contextConfigLocation</param-name>
<param-value>classpath*:/spring-mvc.xml</param-value>
</init-param>
<load-on-startup>1</load-on-startup>
</servlet>
<servlet-mapping>
<servlet-name>DispatcherServlet</servlet-name>
<url-pattern>/</url-pattern>
</servlet-mapping>

<!-- character encoding filter -->
<filter>
<filter-name>encodingFilter</filter-name>
<filter-class>org.springframework.web.filter.CharacterEncodingFilter</filter-class>
<init-param>
<param-name>encoding</param-name>
<param-value>UTF-8</param-value>
</init-param>
<init-param>
<param-name>forceEncoding</param-name>
<param-value>true</param-value>
</init-param>
</filter>
<filter-mapping>
<filter-name>encodingFilter</filter-name>
<url-pattern>/*</url-pattern>
</filter-mapping>

<!-- Druid monitoring: web stat filter -->
<filter>
<filter-name>DruidWebStatFilter</filter-name>
<filter-class>com.alibaba.druid.support.http.WebStatFilter</filter-class>
<init-param>
<param-name>exclusions</param-name>
<param-value>*.js,*.gif,*.jpg,*.png,*.css,*.ico,/druid/*</param-value>
</init-param>
</filter>
<filter-mapping>
<filter-name>DruidWebStatFilter</filter-name>
<url-pattern>/*</url-pattern>
</filter-mapping>

<!-- Druid monitoring: stat view servlet -->
<servlet>
<servlet-name>DruidStatView</servlet-name>
<servlet-class>com.alibaba.druid.support.http.StatViewServlet</servlet-class>
</servlet>
<servlet-mapping>
<servlet-name>DruidStatView</servlet-name>
<url-pattern>/druid/*</url-pattern>
</servlet-mapping>

<welcome-file-list>
<welcome-file>/index.jsp</welcome-file>
</welcome-file-list>
</web-app>

spring-mvc.xml

<?xml version="1.0" encoding="UTF-8"?>
<!-- bean definitions -->
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:p="http://www.springframework.org/schema/p"
xmlns:mvc="http://www.springframework.org/schema/mvc"
xmlns:context="http://www.springframework.org/schema/context"
xmlns:util="http://www.springframework.org/schema/util"
xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.2.xsd
http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.2.xsd
http://www.springframework.org/schema/mvc http://www.springframework.org/schema/mvc/spring-mvc-3.2.xsd
http://www.springframework.org/schema/util http://www.springframework.org/schema/util/spring-util-3.2.xsd">


<!-- Scan this package to create beans and autowire dependencies; change it to your own package -->
<context:component-scan base-package="com.zzcm.log.action" />

<bean id="viewResolver" class="org.springframework.web.servlet.view.InternalResourceViewResolver">
<property name="prefix">
<value>/WEB-INF/jsp/</value>
</property>
<property name="suffix">
<value>.jsp</value>
</property>
</bean>

<!-- Content negotiation manager -->
<!-- 1. check the path extension first (e.g. my.pdf); 2. then the parameter (e.g. my?format=pdf); 3. then the Accept header -->
<bean id="contentNegotiationManager" class="org.springframework.web.accept.ContentNegotiationManagerFactoryBean">
<!-- Maps path extensions to MIME types, e.g. /user.json => application/json -->
<!--<property name="favorPathExtension" value="false"/>-->
<!-- Enables /userinfo/123?format=json style requests -->
<!--<property name="favorParameter" value="false"/>-->
<!--<property name="parameterName" value="format"/>-->
<!-- Whether to ignore the Accept header -->
<!--<property name="ignoreAcceptHeader" value="false"/>-->

<property name="mediaTypes"> <!-- extension-to-MIME mapping; only used when favorPathExtension/favorParameter is true -->
<value>
json=application/json
xml=application/xml
html=text/html
*=*/*
</value>
</property>
</bean>

<mvc:annotation-driven content-negotiation-manager="contentNegotiationManager">
<mvc:message-converters register-defaults="true">
<!--<ref bean="stringHttpMessageConverter" />-->
<!--<ref bean="fastJsonHttpMessageConverter" />-->
<!-- Set StringHttpMessageConverter to UTF-8 to avoid garbled output -->
<bean class="org.springframework.http.converter.StringHttpMessageConverter">
<constructor-arg value="UTF-8"/>
<property name = "supportedMediaTypes">
<list>
<bean class="org.springframework.http.MediaType">
<constructor-arg index="0" value="text"/>
<constructor-arg index="1" value="plain"/>
<constructor-arg index="2" value="UTF-8"/>
</bean>
<bean class="org.springframework.http.MediaType">
<constructor-arg index="0" value="*"/>
<constructor-arg index="1" value="*"/>
<constructor-arg index="2" value="UTF-8"/>
</bean>
</list>
</property>
</bean>

<!-- Prevents IE from prompting a file download when JSON is returned to AJAX calls -->
<bean id="fastJsonHttpMessageConverter" class="com.alibaba.fastjson.support.spring.FastJsonHttpMessageConverter">
<property name="supportedMediaTypes">
<list>
<value>application/json;charset=UTF-8</value>
<value>text/plain;charset=utf-8</value>
<value>text/html;charset=utf-8</value>
</list>
</property>
<property name="features">
<value type="com.alibaba.fastjson.serializer.SerializerFeature">WriteDateUseDateFormat</value>
<!-- features lets you configure extras such as date formatting, writing null as "", etc. -->
</property>
</bean>
</mvc:message-converters>
</mvc:annotation-driven>
</beans>

spring-root.xml and related files

spring-root.xml

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:context="http://www.springframework.org/schema/context"
xmlns:jdbc="http://www.springframework.org/schema/jdbc"
xmlns:jee="http://www.springframework.org/schema/jee"
xmlns:tx="http://www.springframework.org/schema/tx"
xmlns:jpa="http://www.springframework.org/schema/data/jpa"
xmlns:aop="http://www.springframework.org/schema/aop"
xsi:schemaLocation="
http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.2.xsd
http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.2.xsd
http://www.springframework.org/schema/jdbc http://www.springframework.org/schema/jdbc/spring-jdbc-3.2.xsd
http://www.springframework.org/schema/jee http://www.springframework.org/schema/jee/spring-jee-3.2.xsd
http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx-3.2.xsd
http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop-3.2.xsd
http://www.springframework.org/schema/data/jpa http://www.springframework.org/schema/data/jpa/spring-jpa.xsd"

default-lazy-init="false">


<!-- Load property files for JDBC settings etc.; handled in spring-db.xml, so omitted here -->
<!--<bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">-->
<!--<property name="locations">-->
<!--<list>-->
<!--<value>classpath*:db.properties</value>-->
<!--</list>-->
<!--</property>-->
<!--</bean>-->

<import resource="classpath*:spring-db.xml"/>

<!-- Auto-register annotated beans and ensure @Required/@Autowired properties are injected -->
<context:component-scan base-package="com.zzcm.log">
<context:exclude-filter type="annotation" expression="org.springframework.stereotype.Controller"/>
</context:component-scan>

<!-- JSR-303 validator definition -->
<bean id="validator" class="org.springframework.validation.beanvalidation.LocalValidatorFactoryBean" />

<!-- multipart file upload support -->
<bean id="multipartResolver"
class="org.springframework.web.multipart.commons.CommonsMultipartResolver">

<property name="maxUploadSize" value="104857600"/>
</bean>


<!-- AOP auto-proxy support -->
<!--<aop:aspectj-autoproxy />-->
</beans>

spring-db.xml

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:context="http://www.springframework.org/schema/context"
xmlns:jdbc="http://www.springframework.org/schema/jdbc"
xmlns:jee="http://www.springframework.org/schema/jee"
xmlns:tx="http://www.springframework.org/schema/tx"
xmlns:jpa="http://www.springframework.org/schema/data/jpa"
xsi:schemaLocation="
http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.2.xsd
http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.2.xsd
http://www.springframework.org/schema/jdbc http://www.springframework.org/schema/jdbc/spring-jdbc-3.2.xsd
http://www.springframework.org/schema/jee http://www.springframework.org/schema/jee/spring-jee-3.2.xsd
http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx-3.2.xsd
http://www.springframework.org/schema/data/jpa http://www.springframework.org/schema/data/jpa/spring-jpa.xsd"

default-lazy-init="false">

<!-- Load the JDBC properties file -->
<context:property-placeholder location="classpath:db.properties" />

<!-- MySQL data source (Druid connection pool) -->
<bean id="mysqlDataSource" class="com.alibaba.druid.pool.DruidDataSource"
init-method="init" destroy-method="close">

<!-- JDBC driver class -->
<property name="driverClassName" value="${jdbc.driver}" />
<!-- JDBC connection URL -->
<property name="url" value="${jdbc.url}" />
<!-- database user -->
<property name="username" value="${jdbc.username}" />
<!-- database password -->
<property name="password" value="${jdbc.password}" />
<!-- maximum number of active connections -->
<property name="maxActive" value="20" />
<!-- initial pool size -->
<property name="initialSize" value="5" />
<!-- maximum wait for a connection, in ms -->
<property name="maxWait" value="60000" />
<!-- minimum number of idle connections -->
<property name="minIdle" value="2" />
<!-- interval between idle-connection eviction runs, in ms -->
<property name="timeBetweenEvictionRunsMillis" value="3000" />
<!-- minimum idle time before a connection may be evicted, in ms -->
<property name="minEvictableIdleTimeMillis" value="300000" />
<!-- SQL used to validate connections -->
<property name="validationQuery" value="SELECT 'x'" />
<!-- validate connections while they sit idle -->
<property name="testWhileIdle" value="true" />
<!-- validate when borrowing from the pool -->
<property name="testOnBorrow" value="false" />
<!-- validate when returning to the pool -->
<property name="testOnReturn" value="false" />
<!-- monitoring/statistics filters -->
<property name="filters" value="wall,stat" />
</bean>

<!-- MySQL JPA integration -->
<bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
<property name="dataSource" ref="mysqlDataSource"></property>
<property name="packagesToScan" value="com.zzcm.log"></property>
<property name="persistenceUnitName" value="mysqldb"></property>
<property name="jpaVendorAdapter">
<bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter">
<property name="showSql" value="true"></property>
<property name="databasePlatform" value="org.hibernate.dialect.MySQLDialect" />
</bean>
</property>
<property name="jpaProperties">
<props>
<!-- maximum depth of the outer-join fetch tree -->
<prop key="hibernate.max_fetch_depth">3</prop>
<prop key="hibernate.jdbc.fetch_size">18</prop>
<prop key="hibernate.jdbc.batch_size">10</prop>
<!-- schema auto-generation: validate|create|create-drop|update -->
<!-- <prop key="hibernate.hbm2ddl.auto">validate</prop> -->
<!-- whether to log SQL -->
<prop key="hibernate.show_sql">true</prop>
<!-- whether to format logged SQL -->
<prop key="hibernate.format_sql">false</prop>
<!-- disable the second-level cache -->
<prop key="hibernate.cache.provider_class">org.hibernate.cache.NoCacheProvider</prop>
<!-- disable entity field validation -->
<prop key="javax.persistence.validation.mode">none</prop>
<prop key="hibernate.ejb.naming_strategy">org.hibernate.cfg.ImprovedNamingStrategy</prop>
</props>
</property>
</bean>

<!-- JPA transaction manager -->
<bean id="transactionManager" class="org.springframework.orm.jpa.JpaTransactionManager">
<property name="entityManagerFactory" ref="entityManagerFactory"/>
</bean>

<!-- Spring Data JPA repositories -->
<!--
<jpa:repositories base-package="cn.ibeans" transaction-manager-ref="transactionManager" entity-manager-factory-ref="entityManagerFactory"/>
-->

<!-- annotation-driven transaction management -->
<tx:annotation-driven transaction-manager="transactionManager" proxy-target-class="true" />
</beans>

logback.xml

<?xml version="1.0"?>
<configuration>
<contextName>LogMng</contextName>
<property name="logname" value="logmng"/>
<timestamp key="bySecond" datePattern="yyyyMMdd'T'HHmmss"/>
<!-- ch.qos.logback.core.ConsoleAppender: console output -->
<appender name="console" class="ch.qos.logback.core.ConsoleAppender">
<encoder>
<pattern>[%-5level] %d{HH:mm:ss.SSS} [%thread] %logger{36} - %msg%n
</pattern>
</encoder>
</appender>

<!-- ch.qos.logback.core.rolling.RollingFileAppender: file output -->
<appender name="file" class="ch.qos.logback.core.rolling.RollingFileAppender">
<Encoding>UTF-8</Encoding>
<File>/home/logs/${logname}/${logname}.log</File>
<rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
<FileNamePattern>/home/logs/${logname}/${logname}-%d{yyyy-MM-dd}.log
</FileNamePattern>
<MaxHistory>30</MaxHistory>
<!--<TimeBasedFileNamingAndTriggeringPolicy-->
<!--class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">-->
<!--<MaxFileSize>5MB</MaxFileSize>-->
<!--</TimeBasedFileNamingAndTriggeringPolicy>-->
</rollingPolicy>
<layout class="ch.qos.logback.classic.PatternLayout">
<pattern>[%-5level] %d{HH:mm:ss.SSS} [%thread] %logger{36} - %msg%n
</pattern>
</layout>
</appender>

<logger name="com.xxxx.log" level="DEBUG">
<appender-ref ref="console" />
<appender-ref ref="file" />
</logger>

<!-- root logger -->
<root>
<!-- only the last <level> takes effect (info here); output goes to both the file and the console -->
<level value="error" />
<level value="info" />
<appender-ref ref="file" />
<appender-ref ref="console" />
</root>

</configuration>

Other properties files

db.properties

jdbc.driver=com.mysql.jdbc.Driver
jdbc.url=jdbc:mysql://ip:3306/dbname?autoReconnect=true&useUnicode=true&characterEncoding=utf8
jdbc.username=user
jdbc.password=password

Implementation

This part covers the entity definitions and the DAO, service, and controller layers.

Entity

Task.java

@Entity
@Table(name = "t_task")
public class Task {
@Id
@GeneratedValue(strategy= GenerationType.IDENTITY)
private Integer id;

private String name;
@Column(name = "group_name")
private String group;

@JoinColumn(name = "n_from",referencedColumnName = "id")
@ManyToOne
private Node from;
@JoinColumn(name = "n_to",referencedColumnName = "id")
@ManyToOne
private Node to;
@Column(name = "crontime")
private String cronTime;

private Boolean enable;

private Boolean sync;

@Column(name = "dir_from")
private String fromDir;
@Column(name = "filename")
private String fileName;
@Column(name = "dir_to")
private String toDir;

//@Transient
@Enumerated(value = EnumType.ORDINAL)
private Status status = Status.STOP;

//TOADD getter setter

public static enum Status{
STOP(0,"停止"),
RUN(1,"运行"),
PAUSE(2,"暂停");
private int code;
private String desc;
private Status(int code,String desc){
this.code = code;
this.desc = desc;
}

//TOADD getter setter
}
}

Fill in the parts marked TOADD (getters and setters) yourself.
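Because status is persisted with @Enumerated(EnumType.ORDINAL), the column stores the enum's declaration index, which lines up with the code field (0/1/2) only as long as the declaration order stays untouched. A standalone sketch of that assumption (the enum mirrors Task.Status; the class name is made up):

```java
public class StatusOrdinalDemo {
    // Same declaration order as Task.Status: STOP, RUN, PAUSE
    enum Status { STOP, RUN, PAUSE }

    public static void main(String[] args) {
        // ORDINAL persistence writes these indices to the status column
        for (Status s : Status.values()) {
            System.out.println(s + " -> " + s.ordinal());
        }
    }
}
```

If a new constant is ever inserted before an existing one, the stored ordinals silently change meaning; persisting the code field explicitly, or using EnumType.STRING, avoids that trap.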

DAO layer

BaseDao.java

public interface BaseDao<T,K extends Serializable> {

public T findById(K id);

public T saveOrUpdate(T bean);

public boolean deleteById(K id);

public boolean delete(T bean);

public T update(T bean);

public List<T> getAll();
}

BaseDaoImpl.java

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import javax.persistence.Query;
import java.io.Serializable;
import java.lang.reflect.ParameterizedType;
import java.util.List;

/**
* Created by Administrator on 2015/12/22.
*/

public abstract class BaseDaoImpl<T,K extends Serializable> implements BaseDao<T,K>{
protected final Logger LOGGER = LoggerFactory.getLogger(getBeanClass());

@PersistenceContext
protected EntityManager em;

protected Class<T> beanClass;

@Override
public T findById(K id) {
try {
return em.find(getBeanClass(),id);
}catch (Exception e){
LOGGER.error("find bean by id["+id+"] failed.",e);
}
return null;
}

@Override
public T saveOrUpdate(T bean) {
try {
em.persist(bean);
return bean ;
} catch (Exception e) {
LOGGER.error("saveOrUpdate bean["+bean+"] failed.",e);
}
return null;
}

@Override
public boolean deleteById(K id) {
try {
T bean = findById(id);
if(null==bean) return false;
em.remove(bean);
return true ;
} catch (Exception e) {
LOGGER.error("delete bean by id["+id+"] failed.",e);
}
return false ;
}

@Override
public boolean delete(T bean) {
try {
em.remove(bean);
return true ;
} catch (Exception e) {
LOGGER.error("delete bean["+bean+"] failed.",e);
}
return false ;
}

@Override
public T update(T bean) {
try {
return em.merge(bean);
} catch (Exception e) {
LOGGER.error("update bean["+bean+"] failed.",e);
}
return null;
}

@Override
public List<T> getAll() {
try {
Query query = em.createQuery("from "+getBeanClass().getSimpleName());
return query.getResultList();
}catch (Exception e){
LOGGER.error("getEnableTasks failed.",e);
}
return null;
}

protected Class<T> getBeanClass(){
if(null == beanClass){
ParameterizedType parameterizedType = (ParameterizedType)this.getClass().getGenericSuperclass();
beanClass = (Class<T>)parameterizedType.getActualTypeArguments()[0];
}
return beanClass;
}
}
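The only non-obvious part of BaseDaoImpl is getBeanClass(), which recovers the entity class from the subclass's generic declaration. A minimal standalone sketch of the same trick (class names here are invented for illustration):

```java
import java.lang.reflect.ParameterizedType;

// A concrete subclass pins down T, so the actual type argument can be
// read back from the generic superclass at runtime.
abstract class GenericHolder<T> {
    @SuppressWarnings("unchecked")
    Class<T> resolveBeanClass() {
        ParameterizedType pt = (ParameterizedType) getClass().getGenericSuperclass();
        return (Class<T>) pt.getActualTypeArguments()[0];
    }
}

class StringHolder extends GenericHolder<String> {}

public class BeanClassDemo {
    public static void main(String[] args) {
        // prints: String
        System.out.println(new StringHolder().resolveBeanClass().getSimpleName());
    }
}
```

Note that this only works when the DAO is subclassed with a concrete type argument directly, as TaskDaoImpl does below; a further raw or anonymous subclass would break the cast.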

TaskDao.java

public interface TaskDao extends BaseDao<Task,Integer>{
public List<Task> getEnableTasks();
}

TaskDaoImpl.java

@Repository
public class TaskDaoImpl extends BaseDaoImpl<Task,Integer> implements TaskDao{

@Override
public List<Task> getEnableTasks() {
try {
Query query = em.createQuery("from Task where enable = true");
return query.getResultList();
}catch (Exception e){
LOGGER.error("getEnableTasks failed.",e);
}
return null;
}
}

Service layer

BaseService.java

public interface BaseService<T,K extends Serializable> {
public T findById(K id);

public T saveOrUpdate(T bean);

public boolean deleteById(K id);

public boolean delete(T bean);

public T update(T bean);

public List<T> getAll();
}

BaseServiceImpl.java

import org.springframework.transaction.annotation.Transactional;

import java.io.Serializable;
import java.util.List;

/**
* Created by Administrator on 2015/12/24.
*/

public abstract class BaseServiceImpl<T,K extends Serializable> implements BaseService<T,K>{

@Override
public T findById(K id) {
return getBaseDao().findById(id);
}

@Override
@Transactional
public T saveOrUpdate(T bean) {
return getBaseDao().saveOrUpdate(bean);
}

@Override
@Transactional
public boolean deleteById(K id) {
return getBaseDao().deleteById(id);
}

@Override
@Transactional
public boolean delete(T bean) {
return getBaseDao().delete(bean);
}

@Override
@Transactional
public T update(T bean) {
return getBaseDao().update(bean);
}

@Override
public List<T> getAll() {
return getBaseDao().getAll();
}

protected abstract BaseDao<T,K> getBaseDao();
}

TaskService.java

public interface TaskService extends BaseService<Task,Integer>{

}

TaskServiceImpl.java

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

import java.util.List;

/**
* Created by Administrator on 2015/12/23.
*/

@Component
public class TaskServiceImpl extends BaseServiceImpl<Task,Integer> implements TaskService{
private static final Logger LOG = LoggerFactory.getLogger(TaskServiceImpl.class);

@Autowired
private TaskDao dao;

@Override
protected BaseDao<Task, Integer> getBaseDao() {
return dao;
}
}

controller

TaskAct.java

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Controller;
import org.springframework.ui.Model;
import org.springframework.ui.ModelMap;
import org.springframework.web.bind.annotation.*;
import org.springframework.web.servlet.mvc.support.RedirectAttributes;

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.util.List;

/**
* Created by Administrator on 2015/12/23.
*/
@Controller
@RequestMapping(value = "/task")
public class TaskAct {

private static Logger log = LoggerFactory.getLogger(TaskAct.class);

@Autowired
private TaskService taskService;
@Autowired
private NodeService nodeService;

@RequestMapping(value="/{id}")
public @ResponseBody Task json(@PathVariable Integer id,ModelMap model,HttpServletRequest request,
HttpServletResponse response){
log.info("id2:"+id);
return taskService.findById(1);
}

@RequestMapping(value="/")
public String list(Model model,RedirectAttributes attr){
List<Task> tasks = taskService.getAll();
model.addAttribute("tasks",tasks);
//attr.addFlashAttribute(user);
//attr.addFlashAttribute("user",user);
return "/task/list";
}

@RequestMapping(value="/add",method = RequestMethod.GET)
public String add(Model model){
List<Node> nodes =nodeService.getAllNodes();
model.addAttribute("nodes",nodes);
return "/task/add";
}

@RequestMapping(value="/add",method = RequestMethod.POST)
public String add(@ModelAttribute Task task, Model model){
Node node = nodeService.findById(task.getFrom().getId());
task.setFrom(node);
taskService.saveOrUpdate(task);
return "redirect:/task/";
}

@RequestMapping(value="/update/{id:\\d+}",method = RequestMethod.GET)
public String update(@PathVariable Integer id,Model model){
List<Node> nodes =nodeService.getAllNodes();
Task task = taskService.findById(id);
if(task.getStatus() != Task.Status.STOP){
return "redirect:/task/";
}
model.addAttribute("task",task);
model.addAttribute("nodes",nodes);
return "/task/add";
}

@RequestMapping(value="/update",method = RequestMethod.POST)
public String update(@ModelAttribute Task task, Model model){
//Node node = nodeService.findById(task.getFrom().getId());
//task.setFrom(node);
taskService.saveOrUpdate(task);
return "redirect:/task/";
}

@RequestMapping(value="/delete/{id}")
public String delete(@PathVariable Integer id, Model model){
taskService.deleteById(id);
return "redirect:/task/";
}
}

The first method (json) returns its result as JSON.

jsp

Where a JSP file lives is determined jointly by spring-mvc.xml and the controller's return value: with the viewResolver above, a return value of "/task/list" resolves to /WEB-INF/jsp/task/list.jsp.

list.jsp

<%@ page contentType="text/html;charset=UTF-8" language="java" %>
<%@ taglib uri="http://java.sun.com/jsp/jstl/core" prefix="c" %>
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<html>
<head>
<title>任务列表</title>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<%--<script src="/js/jquery-1.11.3.js" style="text/javascript"/>--%>
</head>
<body>
<a href="/task/add">添加任务</a>
<table border="1">
<thead>
<td>ID</td>
<td>名称</td>
<td>组名</td>
<td>源地址</td>
<td>时间</td>
<td>状态</td>
<td>是否启用</td>
<td>是否异步</td>
<td>操作</td>
</thead>
<c:forEach items="${tasks}" var="task" varStatus="vs">
<tr>
<td align = "center">${task.id}</td>
<td align = "center">${task.name}</td>
<td align = "center">${task.group}</td>
<td align = "center">${task.from.name}</td>
<td align = "center">${task.cronTime}</td>
<td align = "center">${task.status.desc}${task.status.code}</td>
<td align = "center">${task.enable}</td>
<td align = "center">${task.sync}</td>
<td align = "center">
<c:if test="${(empty task.status or task.status.code==0) and task.enable}">
<a href="/task/disable/${task.id}">禁用</a>
<a href="/task/start/${task.id}">启动</a>
</c:if>
<c:if test="${(empty task.status or task.status.code==0) and !task.enable}">
<a href="/task/enable/${task.id}">启用</a>
</c:if>
<c:if test="${(empty task.status or task.status.code==0)}">
<a href="/task/update/${task.id}">更新</a>
<a href="/task/delete/${task.id}">删除</a>
</c:if>
<c:if test="${task.status.code==1}">
<a href="/task/pause/${task.id}">暂停</a>
<a href="/task/stop/${task.id}">停止</a>
</c:if>
<c:if test="${task.status.code==2}">
<a href="/task/resume/${task.id}">恢复</a>
<a href="/task/stop/${task.id}">停止</a>
</c:if>
</td>
</tr>
</c:forEach>
</table>
</body>
</html>

add.jsp

<%@ page contentType="text/html;charset=UTF-8" language="java" %>
<%@ taglib uri="http://java.sun.com/jsp/jstl/core" prefix="c" %>
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<html>
<head>
<title>添加任务</title>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<%--<script src="/js/jquery-1.11.3.js" style="text/javascript"/>--%>
</head>
<body>
<form <c:choose><c:when test="${not empty task.id}">action="/task/update"</c:when>
<c:otherwise>action="/task/add"</c:otherwise></c:choose> method="post">
<input type="hidden" name="id" value="${task.id}">
<table>
<tr>
<td align = "center">任务名</td>
<td>
<input type="text" name="name" value="${task.name}">
</td>
</tr>
<tr>
<td align = "center">任务组</td>
<td>
<input type="text" name="group" value="${task.group}">
</td>
</tr>
<tr>
<td align = "center">源地址</td>
<td>
<select name="from.id" >
<c:forEach items="${nodes}" var="node" varStatus="vs">
<option value="${node.id }">
${node.name}
</option>
</c:forEach>
</select>
</td>
</tr>
<tr>
<td align = "center">源文件夹</td>
<td>
<input type="text" name="fromDir" value="${task.fromDir}">
</td>
</tr>
<tr>
<td align = "center">文件名</td>
<td>
<input type="text" name="fileName" value="${task.fileName}">
</td>
</tr>
<tr>
<td align = "center">目标文件夹</td>
<td>
<input type="text" name="toDir" value="${task.toDir}">
</td>
</tr>
<tr>
<td align = "center">执行时间表达式</td>
<td>
<input type="text" name="cronTime" value="${task.cronTime}">
</td>
</tr>
<tr>
<td align = "center">是否启用</td>
<td>
<input type="radio" name="enable" value="true" <c:if test="${task.enable}">checked="checked"</c:if>>是
<input type="radio" name="enable" value="false" <c:if test="${!task.enable}">checked="checked"</c:if>>否
</td>
</tr>
<tr>
<td align = "center">是否同步</td>
<td>
<input type="radio" name="sync" value="true" <c:if test="${task.sync}">checked="checked"</c:if>>是
<input type="radio" name="sync" value="false" <c:if test="${!task.sync}">checked="checked"</c:if>>否
</td>
</tr>
<tr>
<td colspan="2">
<input type="submit" value="确定">
</td>
</tr>
</table>
</form>
</body>
</html>

The full source is available in LogMng. The project dynamically runs remote file-copy tasks over SSH and supports adding, deleting, pausing, resuming, and stopping tasks at runtime.

A project happened to use the IMSI-to-MSISDN mapping rules, so they are recorded here.

Overview

  • IMSI (International Mobile Subscriber Identity):
    IMSI = MCC (mobile country code) + MNC (mobile network code) + MSIN (uniquely identifies the MS within a PLMN)
  • MSISDN (Mobile Subscriber International ISDN/PSTN Number):
    MSISDN = CC (country code) + NDC (national destination code) + SN (subscriber number)

    IMSI

    The IMSI uniquely identifies a mobile subscriber and is stored on the SIM card. It is at most 15 digits long, using only the digits 0-9.

    MCC

    The MCC identifies the subscriber's home country, using 3 digits; China's MCC is 460.

    MNC

    The MNC is the mobile network code, 2-3 digits. The codes currently used in mainland China are:
Carrier             MNC codes
China Mobile        00, 02, 07
China Unicom        01, 06
China Telecom       03, 05
China Telecom (4G)  11
China Tietong       20

MSIN

The MSIN is the mobile subscriber identification number, 10-11 digits, structured as:
$$CC+M0M1M2M3+ABCD$$
CC is assigned per carrier; the M0M1M2M3 part can correspond to H0H1H2H3 in the MDN number, and the four digits ABCD are assigned freely.
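Putting the pieces together, an IMSI can be split positionally into its three fields. A small sketch, assuming a 2-digit MNC (as all the mainland codes above are) and a fabricated IMSI:

```java
public class ImsiSplitDemo {
    // Splits a 15-digit IMSI into MCC / MNC / MSIN, assuming a 2-digit MNC
    // (as used by the mainland Chinese carriers listed above).
    static String[] split(String imsi) {
        return new String[] {
            imsi.substring(0, 3),  // MCC, e.g. 460 = China
            imsi.substring(3, 5),  // MNC, e.g. 00 = China Mobile
            imsi.substring(5)      // MSIN (subscriber part)
        };
    }

    public static void main(String[] args) {
        // Fabricated IMSI, for illustration only
        String[] parts = split("460001234567890");
        System.out.println(parts[0] + " / " + parts[1] + " / " + parts[2]);
    }
}
```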

MSISDN

The MSISDN is simply what we usually call the phone number.
$$MSISDN=CC+NDC+SN$$

where:
CC = country code (86 for China)
NDC = national destination code
SN = subscriber number

The NDC:
Each GSM network is assigned a national destination code (NDC), and a carrier may request more than one. The MSISDN length is variable (it depends on the network structure and numbering plan); excluding any prefix it can be up to 15 digits, and China's GSM national number is 11 digits.
$$NDC=N1N2N3(access\ number)+H1H2H3H4(HLR\ id)$$
The access number identifies the network; the prefixes currently in use are 139, 138, and so on.
The HLR id identifies the subscriber's home HLR and also the local mobile network number.

For example: 133-3333

IMSI-to-MSISDN mapping

Bear in mind that an IMSI and MSISDN do not necessarily correspond exactly, but some observed rules can serve as a reference.
So far, mappings have been collected for China Mobile and China Unicom; the rest are unavailable for now and cannot be derived.

// Each regex maps an IMSI prefix to a phone-number prefix; the capture
// groups are reassembled into the first seven digits of the MSISDN.
val s130 = """^46001(\d{3})(\d)[01]\d+""".r
val s131 = """^46001(\d{3})(\d)9\d+""".r
val s132 = """^46001(\d{3})(\d)2\d+""".r
val s134 = """^460020(\d)(\d{3})\d+""".r
val s13x0 = """^46000(\d{3})([5-9])\d+""".r
val s13x = """^46000(\d{3})([0-4])(\d)\d+""".r
val s150 = """^460023(\d)(\d{3})\d+""".r
val s151 = """^460021(\d)(\d{3})\d+""".r
val s152 = """^460022(\d)(\d{3})\d+""".r
val s155 = """^46001(\d{3})(\d)4\d+""".r
val s156 = """^46001(\d{3})(\d)3\d+""".r
val s157 = """^460077(\d)(\d{3})\d+""".r
val s158 = """^460028(\d)(\d{3})\d+""".r
val s159 = """^460029(\d)(\d{3})\d+""".r
val s147 = """^460079(\d)(\d{3})\d+""".r
val s185 = """^46001(\d{3})(\d)5\d+""".r
val s186 = """^46001(\d{3})(\d)6\d+""".r
val s187 = """^460027(\d)(\d{3})\d+""".r
val s188 = """^460078(\d)(\d{3})\d+""".r
val s1705 = """^460070(\d)(\d{3})\d+""".r
val s170x = """^46001(\d{3})(\d)8\d+""".r
val s178 = """^460075(\d)(\d{3})\d+""".r
val s145 = """^46001(\d{3})(\d)7\d+""".r
val s182 = """^460026(\d)(\d{3})\d+""".r
val s183 = """^460025(\d)(\d{3})\d+""".r
val s184 = """^460024(\d)(\d{3})\d+""".r

val calphone = imsi match {
  case s130(bcd, a) => "130" + a + bcd
  case s131(bcd, a) => "131" + a + bcd
  case s132(bcd, a) => "132" + a + bcd
  case s134(a, bcd) => "134" + a + bcd
  case s13x0(bcd, s) => "13" + s + "0" + bcd
  case s13x(bcd, s, a) => "13" + (s.toInt + 5) + a + bcd
  case s150(a, bcd) => "150" + a + bcd
  case s151(a, bcd) => "151" + a + bcd
  case s152(a, bcd) => "152" + a + bcd
  case s155(bcd, a) => "155" + a + bcd
  case s156(bcd, a) => "156" + a + bcd
  case s157(a, bcd) => "157" + a + bcd
  case s158(a, bcd) => "158" + a + bcd
  case s159(a, bcd) => "159" + a + bcd
  case s147(a, bcd) => "147" + a + bcd
  case s185(bcd, a) => "185" + a + bcd
  case s186(bcd, a) => "186" + a + bcd
  case s187(a, bcd) => "187" + a + bcd
  case s188(a, bcd) => "188" + a + bcd
  case s1705(a, bcd) => "170" + a + bcd
  case s170x(bcd, a) => "170" + a + bcd
  case s178(a, bcd) => "178" + a + bcd
  case s145(bcd, a) => "145" + a + bcd
  case s182(a, bcd) => "182" + a + bcd
  case s183(a, bcd) => "183" + a + bcd
  case s184(a, bcd) => "184" + a + bcd
  case _ => "0"
}

A final reminder: the relationship between the two is not guaranteed to hold as described here; use it only as a reference.
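The pattern match above can be wrapped in a function for reuse. Note that these rules only recover the first seven digits of the number (access number plus HLR id), not the full MSISDN. A reduced sketch keeping just two of the rules above (the function name is illustrative):

```scala
// A reduced sketch of the mapping above, with only two rules kept.
// Returns the 7-digit number prefix, or "0" when no rule matches.
def imsiToPrefix(imsi: String): String = {
  val s150 = """^460023(\d)(\d{3})\d+""".r // 150-prefixed numbers
  val s134 = """^460020(\d)(\d{3})\d+""".r // 134-prefixed numbers
  imsi match {
    case s150(a, bcd) => "150" + a + bcd
    case s134(a, bcd) => "134" + a + bcd
    case _            => "0"
  }
}

imsiToPrefix("460023123456789") // -> "1501234"
imsiToPrefix("460000000000000") // -> "0" (no matching rule)
```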

The SparkSQL version used here is 1.5.2.

Reading from MySQL

There are two ways to read:

  1. format
  2. jdbc

    Method 1

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.SQLContext

    val conf = new SparkConf().setAppName("SQL")
    val sc = new SparkContext(conf)
    val sqlContext = new SQLContext(sc)

    val srcData = sqlContext.read.format("jdbc").options(Map(
      "url" -> "jdbc:mysql://IP:PORT/DBNAME",
      "dbtable" -> "TABLE_NAME",
      "driver" -> "com.mysql.jdbc.Driver",
      "user" -> "USER",
      "password" -> "PWD")).load()
    // show() returns Unit, so there is no point assigning its result
    srcData.select("name", "age").show()

Note: replace IP, PORT, DBNAME, TABLE_NAME, USER, and PWD with your own values.

Method 2

import java.util.Properties

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

val conf = new SparkConf().setAppName("SQL")
val sc = new SparkContext(conf)
val sqlContext = new SQLContext(sc)

val properties = new Properties()
properties.setProperty("user", "USER")
properties.setProperty("password", "PWD")

val dataFrame = sqlContext.read.jdbc("jdbc:mysql://IP:PORT/DBNAME",
  "TABLE_NAME", properties)
dataFrame.select("name", "age").show()

Note: replace IP, PORT, DBNAME, TABLE_NAME, USER, and PWD with your own values.

Writing to MySQL

This example demonstrates writing an RDD into the database.

import java.util.Properties

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.{Row, SQLContext, SaveMode}
import org.apache.spark.sql.types.{IntegerType, StringType, StructField, StructType}

val conf = new SparkConf().setAppName("SQL")
val sc = new SparkContext(conf)
val sqlContext = new SQLContext(sc)
case class User(name: String, age: Int)

val properties = new Properties()
properties.setProperty("user", "USER")
properties.setProperty("password", "PWD")

val users = sc.parallelize(1 to 10).map(f => User("张" + f, 20 + f)).map(f => Row(f.name, f.age))

val schema = StructType(Array(StructField("name", StringType, true),
  StructField("age", IntegerType, true)))
sqlContext.createDataFrame(users, schema).write.mode(SaveMode.Append)
  .jdbc("jdbc:mysql://IP:PORT/DBNAME", "TABLE_NAME", properties)

Note: replace IP, PORT, DBNAME, TABLE_NAME, USER, and PWD with your own values.

Choosing the save mode

mode specifies how the table is written:
SaveMode.Append: appends the new data after the existing data
SaveMode.Overwrite: deletes the existing table data first
SaveMode.ErrorIfExists: throws a "Table user already exists" exception; this is the default
SaveMode.Ignore: if the table already contains data, the new data is discarded

StructType

StructType takes a collection of StructField parameters.
StructField has the following four fields:

case class StructField(
  // field name
  name: String,
  // data type
  dataType: DataType,
  // whether nulls are allowed
  nullable: Boolean = true,
  // metadata
  metadata: Metadata = Metadata.empty)

DataType includes the following types:

StringType
FloatType
IntegerType
ByteType
ShortType
DoubleType
LongType
BinaryType
BooleanType
DateType
DecimalType
TimestampType

The mapping between DataType and MySQL data types is as follows:

field.dataType match {
  case IntegerType => "INTEGER"
  case LongType => "BIGINT"
  case DoubleType => "DOUBLE PRECISION"
  case FloatType => "REAL"
  case ShortType => "INTEGER"
  case ByteType => "BYTE"
  case BooleanType => "BIT(1)"
  case StringType => "TEXT"
  case BinaryType => "BLOB"
  case TimestampType => "TIMESTAMP"
  case DateType => "DATE"
  case t: DecimalType => s"DECIMAL(${t.precision},${t.scale})"
  case _ => throw new IllegalArgumentException(s"Don't know how to save $field to JDBC")
}

Note: String maps to TEXT; there is currently no VARCHAR mapping, and no column length is specified.