JVM settings

Choose which GPU the process can see through the CUDA_VISIBLE_DEVICES environment variable:

```shell
# use only the GPU with index 1
export CUDA_VISIBLE_DEVICES=1

# hide all GPUs and fall back to CPU
export CUDA_VISIBLE_DEVICES=-1
```
POM

Maven profiles let the build pull in either the CPU or the GPU TensorFlow artifacts depending on the target environment:

```xml
<profiles>
    <profile>
        <id>dev</id>
        <activation>
            <activeByDefault>true</activeByDefault>
        </activation>
        <properties>
            <profileActive>dev</profileActive>
        </properties>
        <dependencies>
            <!-- CPU-only TensorFlow for local development -->
            <dependency>
                <groupId>org.tensorflow</groupId>
                <artifactId>tensorflow</artifactId>
                <version>1.15.0</version>
            </dependency>
        </dependencies>
    </profile>
    <profile>
        <id>prod</id>
        <properties>
            <profileActive>prod</profileActive>
        </properties>
        <dependencies>
            <!-- core Java API plus the GPU JNI bindings for production -->
            <dependency>
                <groupId>org.tensorflow</groupId>
                <artifactId>libtensorflow</artifactId>
                <version>1.15.0</version>
            </dependency>
            <dependency>
                <groupId>org.tensorflow</groupId>
                <artifactId>libtensorflow_jni_gpu</artifactId>
                <version>1.15.0</version>
            </dependency>
        </dependencies>
    </profile>
</profiles>
```
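The profile is then chosen at build time: dev is active by default, and prod (GPU) is selected explicitly, e.g.:

```shell
# build with the GPU (prod) dependency set
mvn clean package -Pprod
```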
Loading the model

CPU loading:

```java
session = SavedModelBundle
        .load(modelConfig.getPath(), modelConfig.getTags())
        .session();
```
GPU loading, pinning the visible GPU cards:

```java
// restrict TensorFlow to the configured GPUs and cap memory usage
GPUOptions gpuOptions = GPUOptions.newBuilder()
        .setVisibleDeviceList(modelConfig.getGpuIds())   // e.g. "0,1"
        .setPerProcessGpuMemoryFraction(0.85f)           // use at most 85% of GPU memory
        .setAllowGrowth(true)                            // allocate on demand
        .build();
ConfigProto configProto = ConfigProto.newBuilder()
        .setAllowSoftPlacement(true)     // fall back to CPU for unsupported ops
        .setLogDevicePlacement(true)     // log where each op is placed
        .setGpuOptions(gpuOptions)
        .build();
session = SavedModelBundle
        .loader(modelConfig.getPath())
        .withTags(modelConfig.getTags())
        .withConfigProto(configProto.toByteArray())
        .load()
        .session();
```
Model prediction

The prediction entry point takes the named input features (each a batch of integer feature vectors) and returns one score per sample:

```java
private List<Double> innerPredict(Map<String, List<List<Integer>>> featureMap, PredictModel model)
```
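Before feeding the session, such a method has to convert each feature batch into primitive arrays, because the `Tensor.create(...)` overloads in the TF 1.x Java API take primitive arrays rather than collections. A minimal sketch of that conversion (the helper name is an illustration, not part of the original code):

```java
import java.util.List;

public class TensorInput {
    // Copy a batch of integer feature vectors into the primitive
    // int[][] (batch x feature_length) layout that Tensor.create(...) accepts.
    static int[][] toMatrix(List<List<Integer>> rows) {
        int[][] matrix = new int[rows.size()][];
        for (int i = 0; i < rows.size(); i++) {
            List<Integer> row = rows.get(i);
            matrix[i] = new int[row.size()];
            for (int j = 0; j < row.size(); j++) {
                matrix[i][j] = row.get(j);
            }
        }
        return matrix;
    }
}
```

A tensor built from this matrix is then fed to `session.runner().feed(opName, tensor)` and the result fetched from the output op.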
Model configuration:

```java
public class ModelConfig {
    private String path;
    private String tags = "serve";
    private String defaultInOpPrefix = "serving_default_";
    private String defaultOutOp = "StatefulPartitionedCall";
    private String gpuIds = "-1";
}
```
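For illustration, the defaults above are typically combined at run time: the feed op for a named model input is the prefix plus the feature name, and the fetch op is `defaultOutOp`. A minimal sketch, with a hypothetical `feedOp` helper (not in the original class):

```java
public class ModelConfig {
    private String path;
    private String tags = "serve";
    private String defaultInOpPrefix = "serving_default_";
    private String defaultOutOp = "StatefulPartitionedCall";
    private String gpuIds = "-1";

    // Hypothetical helper: builds the feed op name for a named model input,
    // following the serving_default_ signature naming convention.
    public String feedOp(String featureName) {
        return defaultInOpPrefix + featureName;
    }

    public String getDefaultOutOp() {
        return defaultOutOp;
    }
}
```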
- Mind how single-input and multi-input models are handled
- Mind memory release
- Mind the default parameters such as op names and the input-op prefix
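On the memory-release note: in the TF Java API, `Tensor`, `Session`, and `SavedModelBundle` own native memory and implement `AutoCloseable`, so the tensors fed to and fetched from each prediction should be closed, typically with try-with-resources. A stand-in sketch of the pattern (a placeholder resource is used here because the real `Tensor` needs the native library):

```java
// Placeholder standing in for org.tensorflow.Tensor: like the real class,
// it owns native memory that is only freed when close() runs.
class NativeHandle implements AutoCloseable {
    static int openHandles = 0;
    NativeHandle() { openHandles++; }
    @Override
    public void close() { openHandles--; }
}

public class ReleaseDemo {
    static void predictOnce() {
        // try-with-resources releases both handles even if the call throws
        try (NativeHandle input = new NativeHandle();
             NativeHandle output = new NativeHandle()) {
            // session.runner().feed(inOp, input).fetch(outOp).run() goes here
        }
    }
}
```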