Luan Rios Campos, P. Nogueira, E. G. S. Nascimento
Day 1 Tue, October 29, 2019 · DOI: 10.4043/29904-ms · Published: 2019-10-28
Tuning a Fully Convolutional Network for Velocity Model Estimation
Different parameters of a fully convolutional network (FCN) were tested to evaluate which combination best predicts sound velocity models from a single configuration of seismic modeling. The evaluation held some parameters of the deep learning model fixed, such as the number of epochs, batch size, and loss function, while varying the optimizer and activation function. The optimizers considered were RMSprop, Adam, and Adamax, while the activation functions were the Rectified Linear Unit (ReLU), Leaky ReLU, Exponential Linear Unit (ELU), and Parametric ReLU (PReLU). Five metrics were used to evaluate the model during the testing stage: R2, Pearson's r, factor of two, mean absolute error, and mean squared error. Within the scope of these experiments, it was found that the optimizer has much more influence than the activation function in determining the resolution of the output model. The best combination was the PReLU activation function with the Adamax optimizer.
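The five test-stage metrics named in the abstract can be computed from a pair of prediction and target arrays. The sketch below is an illustrative implementation, not the authors' code; the function name and the conventional 0.5x–2x bounds for the factor-of-two metric are assumptions.

```python
import numpy as np

def evaluate(y_true, y_pred):
    """Illustrative sketch of the five metrics from the abstract:
    R^2, Pearson's r, factor of two, MAE, and MSE."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)

    # Mean absolute error and mean squared error
    mae = np.mean(np.abs(y_pred - y_true))
    mse = np.mean((y_pred - y_true) ** 2)

    # R^2: coefficient of determination (1 - residual / total variance)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot

    # Pearson's r: linear correlation between target and prediction
    pearson_r = np.corrcoef(y_true, y_pred)[0, 1]

    # Factor of two: fraction of predictions within a factor of 2
    # of the target (assumed definition, common in geoscience work)
    ratio = y_pred / y_true
    fac2 = np.mean((ratio >= 0.5) & (ratio <= 2.0))

    return {"r2": r2, "pearson_r": pearson_r,
            "fac2": fac2, "mae": mae, "mse": mse}
```

A dictionary of metrics like this makes it straightforward to tabulate results across the twelve optimizer/activation combinations (3 optimizers x 4 activations) the study compares.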