It was at the stage of writing today's article that I noticed various things. For example: the "gradient" of section 4.4 of ãŒãããäœãDeep Learning (Deep Learning from Scratch: the theory and implementation of deep learning, learned with Python; hereafter "the text") is nothing other than the multi-dimensional gradient I once learned in vector calculus!
Why did it take me so long to see something that is obvious once written down? In the text, when the word "gradient" first appears on p. 103, the gradient of a two-variable function is obtained by numerical differentiation, and in the next step, on p. 109, the target is extended to a neural network; I suspect that is what delayed the realization. The network there, simple an example as it is for a neural network, still has 6 variables (6 dimensions). Even when I cut it down to 3 variables (3 dimensions) on my own, the penny refused to drop. In a vector calculus course, nobody would ever make you work a 6-dimensional example, would they?
To push further: there is a mismatch of meaning between "dimension" in mathematics (strictly, in geometry) and "dimension" in programming (strictly, for arrays). The neural network on p. 109 of the text has 6 variables, so mathematically it is 6-dimensional, while in programming terms it is a 2-dimensional, 6-element (3×2) array.
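The mismatch is easy to see in NumPy itself. What mathematics would call six variables is, to the array, 2-dimensional; a quick check (the variable name here is mine):

```python
import numpy as np

# A 3x2 weight matrix: six variables to the mathematician...
W = np.zeros((3, 2))

print(W.ndim)   # ...but a 2-dimensional array to NumPy: prints 2
print(W.size)   # total number of variables (elements): prints 6
print(W.shape)  # the "2-dimensional, 6-element" view: prints (3, 2)
```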
Once you manage to put it into words it sounds like nothing, but while you are still in the state of not even knowing what it is you don't understand, a thing like this is quite a stumbling block.
Here is the 4-segment LED in question (on the 4-segment LED, see the May 21 article). I invented it myself, so no such thing exists in reality. The premise is that recognizing it requires a 2-layer neural network (last time's perceptron was 1 layer).
So this time I decided to repurpose the 2-layer neural-network class "TwoLayerNet" coded on pp. 114-115 of the text.
That said, I am not yet comfortable with the concept of a class, important as it is in the Python language. For now, then, I will repurpose it piecemeal, as plain data and functions.
The code shown below should be executable by pasting it into the Anaconda Prompt.
import sys, os
sys.path.append(os.pardir)  # so the book's common/ package can be imported
import numpy as np
from common.functions import *  # sigmoid, softmax, cross_entropy_error, ...
from common.gradient import numerical_gradient

# input data: three 4-segment patterns
x = np.array([[1, 1, 1, 1], [0, 0, 1, 0], [1, 0, 0, 0]])
# teacher data: one-hot labels for the two classes
t = np.array([[1, 0], [0, 1], [0, 1]])

weight_init_std = 0.01
W1 = weight_init_std * np.random.randn(4, 3)  # layer 1: 4 inputs -> 3 hidden units
W2 = weight_init_std * np.random.randn(3, 2)  # layer 2: 3 hidden units -> 2 outputs
B1, B2 = np.zeros(3), np.zeros(2)             # biases start at zero
The imports are the same as in "TwoLayerNet". The input data "x" and the teacher data "t" I simply gave as numpy arrays. The layer-1 and layer-2 weights and biases are given the same way as in "TwoLayerNet", specifying only their sizes.
As for why the two layers have these sizes, I reproduce only the figure from the May 21 article. "y" is the output data.
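Since the figure itself is not reproduced here, the sizes can also be confirmed by a shape check. The sketch below assumes the layout described above (3 input patterns of 4 segments, a 3-unit hidden layer, 2 output classes) and uses zero weights purely for illustration:

```python
import numpy as np

x = np.array([[1, 1, 1, 1], [0, 0, 1, 0], [1, 0, 0, 0]])  # 3 patterns x 4 segments
W1 = np.zeros((4, 3))  # layer 1 must map 4 inputs to 3 hidden units
W2 = np.zeros((3, 2))  # layer 2 must map 3 hidden units to 2 outputs
B1, B2 = np.zeros(3), np.zeros(2)

A1 = np.dot(x, W1) + B1   # shape (3, 3): one hidden vector per pattern
A2 = np.dot(A1, W2) + B2  # shape (3, 2): two class scores per pattern
print(A1.shape, A2.shape)  # prints (3, 3) (3, 2)
```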
The function that performs inference is the same as the "predict" method of the "TwoLayerNet" class, copied over as-is (or so I intended).
def predict(x):
    A1 = np.dot(x, W1) + B1   # layer 1: weighted sum plus bias
    Z1 = sigmoid(A1)          # layer 1: sigmoid activation
    A2 = np.dot(Z1, W2) + B2  # layer 2: weighted sum plus bias
    y = softmax(A2)           # output: softmax probabilities
    return y
The loss function likewise reuses the "loss" method of the "TwoLayerNet" class unchanged. In last time's homemade class "Perceptrn" I used the sum-of-squares error as the loss function; switching to the cross-entropy error here carries no particular intent.
def loss(x, t):
    y = predict(x)                    # run inference
    return cross_entropy_error(y, t)  # compare against the teacher data
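Deliberate or not, the two loss functions can be compared side by side. The sketch below uses made-up prediction values and my own minimal restatements of the formulas (the text's `cross_entropy_error` additionally handles other input shapes):

```python
import numpy as np

y = np.array([[0.8, 0.2], [0.3, 0.7], [0.4, 0.6]])  # made-up predictions
t = np.array([[1, 0], [0, 1], [0, 1]])              # teacher data

# sum-of-squares error, as in last time's "Perceptrn"
sse = 0.5 * np.sum((y - t) ** 2)
# cross-entropy error, averaged over the batch, as used here
cee = -np.sum(t * np.log(y + 1e-7)) / y.shape[0]
print(sse, cee)  # roughly 0.29 and 0.36
```

Both are small when the predictions match the teacher data, but the cross-entropy error punishes confidently wrong predictions much more heavily.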
The "accuracy" method, which computes recognition accuracy, is not used this time, so I did not carry it over.
As for updating the parameters, rather than by way of a "numerical_gradient" method, this time too I carry them out by repeated pasting.
In preparation, the loss function "loss" defined above is wrapped in a lambda expression; I confess up front that I do not yet really understand Python's lambda expressions. The learning rate "learning_rate" I set to 1.0 rather than 0.1, in the hope of cutting down the number of pastes.
loss_W = lambda W: loss(x, t)  # wrapper so numerical_gradient can call the loss
learning_rate = 1.0
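A detail that puzzled me at first: the lambda never uses its argument `W`, yet the gradient still comes out right. The reason is that `numerical_gradient` perturbs the elements of the array it is handed in place, and `loss` reads those very same global arrays. A tiny demonstration of the mechanism (names are mine):

```python
import numpy as np

W_demo = np.array([1.0, 2.0])
f = lambda W: np.sum(W_demo ** 2)  # ignores W, reads the global array

before = f(None)   # 1.0**2 + 2.0**2 = 5.0
W_demo[0] += 0.5   # perturb the global array in place
after = f(None)    # 1.5**2 + 2.0**2 = 6.25: f "saw" the change
print(before, after)
```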
Then the following is pasted into the Anaconda Prompt again and again. predict(x) displays the raw inferred values; sandwiching in argmax (text p. 80) is, like last time's step function, a small device to make the result a little easier to read.
The function "numerical_gradient" is the one imported from common\gradient.py. Multiplied by the learning rate, it becomes the correction applied to each weight and bias.
predict(x)                     # raw inferred values
np.argmax(predict(x), axis=1)  # predicted class of each pattern
loss(x, t)                     # current loss
W1 -= learning_rate * numerical_gradient(loss_W, W1)
B1 -= learning_rate * numerical_gradient(loss_W, B1)
W2 -= learning_rate * numerical_gradient(loss_W, W2)
B2 -= learning_rate * numerical_gradient(loss_W, B2)
Paste number one. If the third line from the top comes out as "array([0, 1, 1]…", that is the correct answer.
Paste number two. The value of the loss "loss" is decreasing.
The 26th: the moment just before the correct answer appears.
On the 27th, it produced the correct answer. It varies from run to run, but by about the 30th paste the correct answer seems to emerge.
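For reference, the repeated pasting can be folded into an ordinary `for` loop. The sketch below is self-contained: rather than importing from `common`, it inlines my own minimal versions of the helper functions, so details may differ slightly from the book's code, and the seed is fixed purely for reproducibility:

```python
import numpy as np

np.random.seed(0)  # fixed seed, purely for reproducibility

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def softmax(a):
    e = np.exp(a - a.max(axis=1, keepdims=True))  # shift for stability
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy_error(y, t):
    return -np.sum(t * np.log(y + 1e-7)) / y.shape[0]

def numerical_gradient(f, params, h=1e-4):
    """Central-difference gradient, perturbing each element in place."""
    grad = np.zeros_like(params)
    it = np.nditer(params, flags=['multi_index'])
    while not it.finished:
        i = it.multi_index
        tmp = params[i]
        params[i] = tmp + h
        fxh1 = f(params)
        params[i] = tmp - h
        fxh2 = f(params)
        grad[i] = (fxh1 - fxh2) / (2 * h)
        params[i] = tmp  # restore
        it.iternext()
    return grad

x = np.array([[1, 1, 1, 1], [0, 0, 1, 0], [1, 0, 0, 0]])
t = np.array([[1, 0], [0, 1], [0, 1]])
W1 = 0.01 * np.random.randn(4, 3)
W2 = 0.01 * np.random.randn(3, 2)
B1, B2 = np.zeros(3), np.zeros(2)

def predict(x):
    return softmax(np.dot(sigmoid(np.dot(x, W1) + B1), W2) + B2)

def loss(x, t):
    return cross_entropy_error(predict(x), t)

loss_W = lambda W: loss(x, t)
learning_rate = 1.0

for step in range(50):  # one iteration = one "paste" of the update lines
    for P in (W1, B1, W2, B2):
        P -= learning_rate * numerical_gradient(loss_W, P)

print(loss(x, t))                     # lower than the initial ~0.693
print(np.argmax(predict(x), axis=1))  # with luck, [0 1 1]
```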
At this stage, printing the layer-1 weights "W1" and bias "B1" and the layer-2 weights "W2" and bias "B2" turns up interesting values.
But I had a feeling it would, and this was a trial separate from the pasting above.
To make sure of convergence I repeated the pasting about 20 more times, roughly 50 in all. Being a separate trial, it started from different initial values, and only afterwards did I realize I should have kept that data.
Anyway, I gave it a try, and, using the printed values as a guide, modified the weights "W1", "W2" and biases "B1", "B2" of the May 21 "human-designed algorithm" as follows.
Since every function in common\functions.py has been imported, the step function "step_function" (text p. 45) is available.
x = np.array([[1, 1, 1, 1], [0, 0, 1, 0], [1, 0, 0, 0]])
W1 = np.array([[-0.31, -0.29, -0.30], [-1.11, -1.23, -1.20],
               [-0.29, -0.28, -0.27], [-0.73, -0.77, -0.70]])
B1 = np.array([0.54, 0.70, 0.68])
A1 = np.dot(x, W1) + B1
Z1 = step_function(A1)  # layer-1 output, step activation
W2 = np.array([[-1.31, 1.31], [-1.51, 1.51], [-1.47, 1.48]])
B2 = np.array([1.15, -1.15])
A2 = np.dot(Z1, W2) + B2
Z2 = step_function(A2)  # layer-2 output
Pasting the above code into the Anaconda Prompt and printing Z1 and Z2 gave the following:
When the coefficients shown in the May 21 article were used, Z2, the output of the second layer, came out the same, but Z1, the output of the first layer, came out looking like an identity matrix.
(Output reproduced as an image only.)
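Since the output exists only as a screenshot, the same result can be recomputed. The check below is self-contained, restating the coefficients above together with a minimal stand-in for the text's `step_function`:

```python
import numpy as np

def step_function(a):
    return (a > 0).astype(int)  # minimal stand-in for common/functions.py

x = np.array([[1, 1, 1, 1], [0, 0, 1, 0], [1, 0, 0, 0]])
W1 = np.array([[-0.31, -0.29, -0.30], [-1.11, -1.23, -1.20],
               [-0.29, -0.28, -0.27], [-0.73, -0.77, -0.70]])
B1 = np.array([0.54, 0.70, 0.68])
Z1 = step_function(np.dot(x, W1) + B1)
W2 = np.array([[-1.31, 1.31], [-1.51, 1.51], [-1.47, 1.48]])
B2 = np.array([1.15, -1.15])
Z2 = step_function(np.dot(Z1, W2) + B2)

print(Z1)  # [[0 0 0] [1 1 1] [1 1 1]] -- not one distinct unit per pattern
print(Z2)  # [[1 0] [0 1] [0 1]] -- matches the teacher data t
```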
This, too, seems to hand me an interesting subject to study. Among the 4-segment-LED display patterns, the coefficients settled on May 21 return "anomalous" for every combination other than those corresponding to "0" and "1". Today's coefficients, by contrast, were never trained on those other combinations, so presumably they return wrong answers for them; that much is easy to imagine. What would it take to train in "anomalous" as well? Some ingenuity will be needed.
But before that: while putting together today's script I noticed something important, and decided to try that first.
Today's script, in the end, made do by repurposing the 2-layer neural-network class "TwoLayerNet" of pp. 114-115 of the text. If so, it occurred to me, shouldn't it run without dismantling the class at all, simply by handing the class appropriate arguments?
I tried it, and it really did run! That is the conclusion stated up front, as at the head of this series'(?) blog title. This entry has grown long, so I will stop here for now and spell out the details next time.
To be continued.
ãŒãããäœãDeep Learning (Deep Learning from Scratch: the theory and implementation of deep learning, learned with Python)
- Author: æè€åº·æ¯ (Koki Saitoh)
- Publisher: O'Reilly Japan
- Release date: 2016/09/24
- Media: paperback (softcover)