DETAILED ACTION
Notice of Pre-AIA or AIA Status
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Preliminary Amendment
2. The preliminary amendment to the claims filed on 2/8/2024 has been entered, and the claims as amended are examined below.
Claim Objections
3. Claim 22 is objected to because of the following informality: claim 22 at line 3 recites "at least one client terminal" without proper antecedent basis, because claim 21 at line 2 already recites "at least one client terminal". Claim 22 at line 3 should therefore be amended to recite "the at least one client terminal".
4. Claim 23 is objected to because of the following informality: claim 23 at lines 3-4 recites "at least one client terminal" without proper antecedent basis, because claim 21 at line 2 already recites "at least one client terminal". Claim 23 at lines 3-4 should therefore be amended to recite "the at least one client terminal".
5. Appropriate correction is required.
Double Patenting
6. The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
7. Claims 21-32 and 40 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-9 of U.S. Patent No. 11,599,796 B2 (patent 796). Claim 33 is rejected on the ground of nonstatutory double patenting as being unpatentable over claims 10 and 13 of patent 796 in view of Sak et al. (US Patent Application Publication No. 2019/0043508 A1), which at paragraph [0041] discloses “The remote computing system 120 can include one or more computers in one or more locations. In some implementations, the system 120 implements parallel or distributed processing techniques across multiple computers to train the neural network 140 and/or execute other tasks”, which corresponds to distributed parallel training of a neural network using two or more servers. Although the claims at issue are not identical, they are not patentably distinct from each other because the present independent claims are similar to, and broader than, the corresponding claims of patent 796.
8. The following table shows the correspondence between the claims of the present application and the claims of patent 796.
Claims of present application | Claims of patent 796
21 | 1
22 | 1
23 | 1
24 | 1
25 | 3
26 | 4
27 | 5
28 | 6
29 | 7
30 | 7
31 | 8
32 | 9
33 | 10 in view of Sak
40 | 1
9. The following table shows the correspondence between claim 21 of the present application and claim 1 of patent 796.
Claim 21 of present application / Claim 1 of patent 796 (corresponding limitations shown in pairs):

Claim 21: 21. (New) A system for generating a neural network model for image processing by interacting with at least one client terminal, comprising:
Claim 1: 1. A system for generating a neural network model for image processing by interacting with at least one client terminal, comprising:

Claim 21: at least one processor; and at least one storage device storing a set of instructions, the at least one processor being in communication with the at least one storage device, wherein when executing the set of instructions, the at least one processor is configured to cause the system to:
Claim 1: a network configured to facilitate communication of at least one server device in the system and the at least one client terminal, wherein the at least one server device includes at least one processor and at least one storage device storing a set of instructions, the at least one processor being in communication with the at least one storage device, wherein when executing the set of instructions, the at least one processor is configured to cause the system to:

Claim 21: obtain a plurality of first training samples;
Claim 1: receive, via the network, a plurality of first training samples from the at least one client terminal,

Claim 21: wherein each of the plurality of first training samples includes a first initial image and a first processed image with respect to the first initial image, the first processed image being generated via processing the first initial image using a third neural network model.
Claim 1: wherein each of the plurality of first training samples includes a first initial image and a first processed image with respect to the first initial image, the first processed image being generated by the at least one client terminal via processing the first initial image using a third neural network model;

Claim 21: and train a first neural network model based on the plurality of first training samples to generate a second neural network model,
Claim 1: train a first neural network model based on the plurality of first training samples to generate a second neural network model; transmit, via the network, the second neural network model to the at least one client terminal; and determining the second neural network model as a target neural network model for image processing.
10. Claims 21-33 and 40 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-12 of U.S. Patent No. 11,907,852 B2 (patent 852).
11. The following table shows the correspondence between the claims of the present application and the claims of patent 852.
Claims of present application | Claims of patent 852
21 | 1
22 | 2
23 | 1
24 | 3
25 | 4
26 | 5
27 | 6
28 | 7
29 | 8
30 | 9
31 | 10
32 | 11
33 | 12
40 | 1
12. The following table shows the correspondence between claim 21 of the present application and claim 1 of patent 852.
Claim 21 of present application / Claim 1 of patent 852 (corresponding limitations shown in pairs):

Claim 21: 21. (New) A system for generating a neural network model for image processing by interacting with at least one client terminal, comprising:
Claim 1: 1. A system for generating a neural network model for image processing by interacting with at least one client terminal, comprising:

Claim 21: at least one processor; and at least one storage device storing a set of instructions, the at least one processor being in communication with the at least one storage device, wherein when executing the set of instructions, the at least one processor is configured to cause the system to:
Claim 1: a network configured to facilitate communication of at least one server device in the system and the at least one client terminal, wherein the at least one server device each of which includes at least one processor and at least one storage device storing a set of instructions, the at least one processor being in communication with the at least one storage device, wherein when executing the set of instructions, the at least one processor is configured to cause the system to:

Claim 21: obtain a plurality of first training samples; wherein each of the plurality of first training samples includes a first initial image and a first processed image with respect to the first initial image, the first processed image being generated via processing the first initial image using a third neural network model.
Claim 1: receive, via the network, a plurality of first training samples from at least one first client terminal among the at least one client terminal, wherein each of the plurality of first training samples includes a first initial image and a first processed image with respect to the first initial image, the first processed image being generated by the at least one client terminal via processing the first initial image using a third neural network model;

Claim 21: and train a first neural network model based on the plurality of first training samples to generate a second neural network model,
Claim 1: and train a first neural network model based on the plurality of first training samples to generate a second neural network model.
Claim Rejections - 35 USC § 103
13. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
14. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
15. Claims 21 and 23 are rejected under 35 U.S.C. 103 as being unpatentable over Zhou et al. (US Patent Application Publication No. 2019/0205606 A1) in view of Kumar et al. (US Patent Application Publication No. 2018/0343017 A1) and further in view of Yoo et al. (US Patent Application Publication No. 2016/0125572 A1).
16. Regarding Claim 21, Zhou discloses A system (Abstract reciting “Methods and systems for artificial intelligence based medical image segmentation are disclosed. …”) for generating a neural network model for image processing (paragraph [0056] reciting “In this embodiment of the present invention, a joint learning framework is used to integrate priors to boost the modeling power of deep neural networks for organ segmentation. In an advantageous implementation, distance maps derived from segmentation masks can be used as implicit shape priors, and segmentation DNNs can be learned/trained in conjunction with the distance maps. In addition, the main target segmentation DNN, DNNs from other priors can be introduced for regularization to help improve model performance. …” Segmentation corresponds to image processing.) by interacting with at least one client terminal, (paragraph [0157] reciting “The above described methods for artificial intelligence based medical image segmentation and/or training deep neural networks may be implemented in network-based cloud computing system. In such a network-based cloud computing system, a server communicates with one or more client computers via a network. …”) comprising:
at least one processor; and (paragraph [0036] reciting “… The computer system 100 can be implemented using any type of computer device and includes computer processors, memory units, storage devices, computer software, and other computer components. …”)
at least one storage device storing a set of instructions, the at least one processor being in communication with the at least one storage device, wherein when executing the set of instructions, the at least one processor is configured to cause the system to: (paragraph [0041] reciting “… The master segmentation artificial agent 102 is an intelligent artificial agent that is implemented on one or more computers or processors of computer system 100 by executing computer program instructions (code) loaded into memory. The master segmentation artificial agent 102 observes the medical image to be segmented and autonomously acts to select a segmentation strategy using a segmentation policy learned using machine learning.”)
obtain a plurality of first training samples; and (paragraph [0044] reciting “… A machine learning based mapping is then trained based on the training data (real and synthetic) to select a best segmentation algorithm or combination or segmentation algorithms based on image characteristics of the input images. …” Real and synthetic images correspond to the first training samples.)
train a first neural network model based on the plurality of first training samples (paragraph [0044] reciting “… A machine learning based mapping is then trained based on the training data (real and synthetic) to select a best segmentation algorithm or combination or segmentation algorithms based on image characteristics of the input images. For example, a deep neural network (DNN) can be trained to deep learning techniques, such as deep reinforcement learning, to select one or more segmentation algorithms for a given segmentation task based on characteristics of the medical image to be segmented. …” A DNN is a first neural network model that is trained using the real and synthetic images.) wherein each of the plurality of first training samples includes a first initial image and a first processed image with respect to the first initial image, (paragraph [0044] reciting “… Synthetic training samples can also be generated from the real medical image training samples by converting the real medical image training samples to synthetic images having different imaging characteristics (e.g., noise levels, resolution, etc.). For example, synthetic high-dose and/or low-dose CT images can be generated from normal dose CT images or synthetic images with randomly added image noise can be generated. …”)
While Zhou does not explicitly disclose, Kumar discloses to generate a second neural network model, (paragraph [0068] reciting “… In a second stage, the partially-trained neural network 532 is further trained with a larger subset of the training bits. At the completion of the second training stage, the fully trained neural network 504 becomes available for use.”)
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Zhou with Kumar so that the DNN can be further trained to become a fully trained DNN after the initial training using real and synthetic samples. This is an obviously beneficial modification since a fully trained DNN available for use is the goal of Zhou.
While the combination of Zhou and Kumar does not explicitly disclose, Yoo discloses the first processed image being generated via processing the first initial image using a third neural network model. (paragraph [0009] reciting “In at least some example embodiments, the method may include receiving input images, extracting feature values corresponding to the input images using an image learning model, the image learning model permitting an input and an output to be identical, and generating a synthetic image based on the feature values corresponding to the input images using the image learning model.”;
paragraph [0011] reciting “The generating the synthetic image may include combining the feature values, and generating the synthetic image from the combined feature values using the neural network.“ Therefore, a neural network is used to generate synthetic images based on input real images.)
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Zhou and Kumar with Yoo so that real images are input into a separate neural network that generates synthetic images, which are then used by Zhou for training the DNN (deep neural network) model. This is an obviously beneficial modification since Zhou explicitly requires real and synthetic images for training the DNN and Yoo provides a method of using a separate neural network to generate such training (real and synthetic) images.
17. Regarding Claim 23, Zhou further discloses The system of claim 21, wherein obtaining a plurality of first training samples includes: receiving the plurality of first training samples from at least one client terminal. (paragraph [0036] reciting “… The computer system 100 communicates with one or more image acquisition device 104, a picture archiving and communication system (PACS) 106, and a segmentation algorithm database 108. …” The acquisition device corresponds to the at least one client terminal, which then sends the images and processed synthetic images to the server or computer that trains the DNN.)
18. Claims 22, 24, and 40 are rejected under 35 U.S.C. 103 as being unpatentable over Zhou in view of Kumar, in view of Yoo, and further in view of Wang et al. (US Patent 10,713,754 B1).
19. Regarding Claim 22, while the combination of Zhou, Kumar, and Yoo does not explicitly disclose, Wang discloses The system of claim 21, wherein the at least one processor is further configured to cause the system to: transmit the second neural network model to at least one client terminal. (col. 1, lines 15-27 reciting “Neural networks can be trained for different tasks, such as image processing. A trained neural network model can be transmitted to remote client devices for execution. Different client device types (e.g., different operating systems, screen sizes, processors) often require custom-made neural network models that are specifically designed for execution within a given client device environment. Managing multiple versions of a single neural network model, which then must be sent to different client devices over the network when requested, is difficult and often results in a waste of computational resources. Further, sending all client deices all versions is likewise not practical because client devices often have limited memory.”)
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Zhou, Kumar, and Yoo with Wang so that the trained DNN can be transmitted to client devices. This is an obviously beneficial modification as it allows the client devices in Zhou to perform their own medical image segmentation processing without relying on the server.
20. Regarding Claim 24, Zhou discloses The system of claim 22, wherein the obtaining a plurality of first training samples includes: receiving the plurality of first training samples from at least one first client terminal, and the at least one processor is further configured to cause the system to: (paragraph [0037] reciting “… Multiple image acquisition devices 104 of different medical imaging modalities may communicate with the computer system 100 running the master segmentation artificial agent 102. …”)
Wang further discloses transmit the second neural network model to at least one second client terminal, wherein the at least one first client terminal and the at least one second client terminal are the same or different client terminals. (col. 1, lines 15-27 reciting “Neural networks can be trained for different tasks, such as image processing. A trained neural network model can be transmitted to remote client devices for execution. Different client device types (e.g., different operating systems, screen sizes, processors) often require custom-made neural network models that are specifically designed for execution within a given client device environment. Managing multiple versions of a single neural network model, which then must be sent to different client devices over the network when requested, is difficult and often results in a waste of computational resources. Further, sending all client deices all versions is likewise not practical because client devices often have limited memory.”)
21. Regarding Claim 40, Zhou discloses A system (Abstract reciting “Methods and systems for artificial intelligence based medical image segmentation are disclosed. …”) for generating one or more neural network models for image processing, (paragraph [0056] reciting “In this embodiment of the present invention, a joint learning framework is used to integrate priors to boost the modeling power of deep neural networks for organ segmentation. In an advantageous implementation, distance maps derived from segmentation masks can be used as implicit shape priors, and segmentation DNNs can be learned/trained in conjunction with the distance maps. In addition, the main target segmentation DNN, DNNs from other priors can be introduced for regularization to help improve model performance. …” Segmentation corresponds to image processing.) comprising:
at least one client terminal; (paragraph [0157] reciting “The above described methods for artificial intelligence based medical image segmentation and/or training deep neural networks may be implemented in network-based cloud computing system. In such a network-based cloud computing system, a server communicates with one or more client computers via a network. …”)
at least one server device; (paragraph [0157] reciting “The above described methods for artificial intelligence based medical image segmentation and/or training deep neural networks may be implemented in network-based cloud computing system. In such a network-based cloud computing system, a server communicates with one or more client computers via a network. …”)
a storage device configured to facilitate communication between the at least one client terminal and the at least one server device in the system, each of the at least one server device is configured to: (paragraph [0041] reciting “… The master segmentation artificial agent 102 is an intelligent artificial agent that is implemented on one or more computers or processors of computer system 100 by executing computer program instructions (code) loaded into memory. The master segmentation artificial agent 102 observes the medical image to be segmented and autonomously acts to select a segmentation strategy using a segmentation policy learned using machine learning.”)
train a first neural network model based on a plurality of first training samples (paragraph [0044] reciting “… A machine learning based mapping is then trained based on the training data (real and synthetic) to select a best segmentation algorithm or combination or segmentation algorithms based on image characteristics of the input images. For example, a deep neural network (DNN) can be trained to deep learning techniques, such as deep reinforcement learning, to select one or more segmentation algorithms for a given segmentation task based on characteristics of the medical image to be segmented. …” A DNN is a first neural network model that is trained using the real and synthetic images.) wherein each of the plurality of first training samples includes a first initial image and a first processed image with respect to the first initial image, (paragraph [0044] reciting “… Synthetic training samples can also be generated from the real medical image training samples by converting the real medical image training samples to synthetic images having different imaging characteristics (e.g., noise levels, resolution, etc.). For example, synthetic high-dose and/or low-dose CT images can be generated from normal dose CT images or synthetic images with randomly added image noise can be generated. …”)
While Zhou does not explicitly disclose, Kumar discloses to generate a second neural network, (paragraph [0068] reciting “… In a second stage, the partially-trained neural network 532 is further trained with a larger subset of the training bits. At the completion of the second training stage, the fully trained neural network 504 becomes available for use.”)
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Zhou with Kumar so that the DNN can be further trained to become a fully trained DNN after the initial training using real and synthetic samples. This is an obviously beneficial modification since a fully trained DNN available for use is the goal of Zhou.
While the combination of Zhou and Kumar does not explicitly disclose, Yoo discloses the first processed image being generated via processing the first initial image using a third neural network model; (paragraph [0009] reciting “In at least some example embodiments, the method may include receiving input images, extracting feature values corresponding to the input images using an image learning model, the image learning model permitting an input and an output to be identical, and generating a synthetic image based on the feature values corresponding to the input images using the image learning model.”;
paragraph [0011] reciting “The generating the synthetic image may include combining the feature values, and generating the synthetic image from the combined feature values using the neural network.“ Therefore, a neural network is used to generate synthetic images based on input real images.)
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Zhou and Kumar with Yoo so that real images are input into a separate neural network that generates synthetic images, which are then used by Zhou for training the DNN (deep neural network) model. This is an obviously beneficial modification since Zhou explicitly requires real and synthetic images for training the DNN and Yoo provides a method of using a separate neural network to generate such training (real and synthetic) images.
While the combination of Zhou, Kumar, and Yoo does not explicitly disclose, Wang discloses and transmit the second neural network to one of the at least one client terminal through the storage device. (col. 1, lines 15-27 reciting “Neural networks can be trained for different tasks, such as image processing. A trained neural network model can be transmitted to remote client devices for execution. Different client device types (e.g., different operating systems, screen sizes, processors) often require custom-made neural network models that are specifically designed for execution within a given client device environment. Managing multiple versions of a single neural network model, which then must be sent to different client devices over the network when requested, is difficult and often results in a waste of computational resources. Further, sending all client deices all versions is likewise not practical because client devices often have limited memory.”)
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Zhou, Kumar, and Yoo with Wang so that the trained DNN can be transmitted to client devices. This is an obviously beneficial modification as it allows the client devices in Zhou to perform their own medical image segmentation processing without relying on the server.
22. Claims 33-38 are rejected under 35 U.S.C. 103 as being unpatentable over Zhou in view of Kumar, in view of Wang, and further in view of Sak et al. (US Patent Application Publication No. 2019/0043508 A1).
23. Regarding Claim 33, Zhou discloses A system (Abstract reciting “Methods and systems for artificial intelligence based medical image segmentation are disclosed. …”) for generating one or more neural network models for image processing, (paragraph [0056] reciting “In this embodiment of the present invention, a joint learning framework is used to integrate priors to boost the modeling power of deep neural networks for organ segmentation. In an advantageous implementation, distance maps derived from segmentation masks can be used as implicit shape priors, and segmentation DNNs can be learned/trained in conjunction with the distance maps. In addition, the main target segmentation DNN, DNNs from other priors can be introduced for regularization to help improve model performance. …” Segmentation corresponds to image processing.) comprising:
at least one client terminal; (paragraph [0157] reciting “The above described methods for artificial intelligence based medical image segmentation and/or training deep neural networks may be implemented in network-based cloud computing system. In such a network-based cloud computing system, a server communicates with one or more client computers via a network. …”)
a network configured to facilitate communication between the at least one client terminal and the at least two server devices in the system, (paragraph [0036] reciting “… In another possible embodiment, the computer system 100 running the master segmentation artificial agent 102 can be implemented on a remote cloud-based computer system using one or more networked computer devices on the cloud-based computer system. In this case, medical images of patients can be transmitted to a server of the cloud-based computer system, the master segmentation artificial agent 102 can be run as part of a cloud-based service to perform medical image registration, and the segmentation results can then be returned to a local computer device.”)
train a first neural network model for image processing based on a plurality of first training samples (paragraph [0044] reciting “… A machine learning based mapping is then trained based on the training data (real and synthetic) to select a best segmentation algorithm or combination or segmentation algorithms based on image characteristics of the input images. For example, a deep neural network (DNN) can be trained to deep learning techniques, such as deep reinforcement learning, to select one or more segmentation algorithms for a given segmentation task based on characteristics of the medical image to be segmented. …” A DNN is a first neural network model that is trained using the real and synthetic images.)
While Zhou does not explicitly disclose, Kumar discloses to generate a second neural network, (paragraph [0068] reciting “… In a second stage, the partially-trained neural network 532 is further trained with a larger subset of the training bits. At the completion of the second training stage, the fully trained neural network 504 becomes available for use.”)
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Zhou with Kumar so that the DNN can be trained to become a fully trained DNN after the initial training using real and synthetic samples. This is an obviously beneficial modification since making a fully trained DNN available for use is the goal of Zhou.
While the combination of Zhou and Kumar does not explicitly disclose, Wang discloses and transmit the second neural network to one of the at least one client terminal. (col. 1, lines 15-27 reciting “Neural networks can be trained for different tasks, such as image processing. A trained neural network model can be transmitted to remote client devices for execution. Different client device types (e.g., different operating systems, screen sizes, processors) often require custom-made neural network models that are specifically designed for execution within a given client device environment. Managing multiple versions of a single neural network model, which then must be sent to different client devices over the network when requested, is difficult and often results in a waste of computational resources. Further, sending all client deices all versions is likewise not practical because client devices often have limited memory.”)
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Zhou and Kumar with Wang so that the trained DNN can be transmitted to client devices. This is an obviously beneficial modification as it allows the client devices in Zhou to perform their own medical image segmentation processing without relying on the server.
While the combination of Zhou, Kumar, and Wang does not explicitly disclose, Sak discloses at least two server devices; (paragraph [0041] reciting “The remote computing system 120 can include one or more computers in one or more locations. In some implementations, the system 120 implements parallel or distributed processing techniques across multiple computers to train the neural network 140 and/or execute other tasks.”;
paragraph [0085] reciting “The computing device 700 may be implemented in a number of different forms, as shown in the figure. For example, it may be implemented as a standard server 720, or multiple times in a group of such servers. …”)
and the at least two server devices in the system (paragraph [0041] reciting “The remote computing system 120 can include one or more computers in one or more locations. In some implementations, the system 120 implements parallel or distributed processing techniques across multiple computers to train the neural network 140 and/or execute other tasks.”;
paragraph [0085] reciting “The computing device 700 may be implemented in a number of different forms, as shown in the figure. For example, it may be implemented as a standard server 720, or multiple times in a group of such servers. …”)
wherein the at least two server devices are connected to the network through a distributed connection, and each of the at least two service devices is configured to: (paragraph [0041] reciting “The remote computing system 120 can include one or more computers in one or more locations. In some implementations, the system 120 implements parallel or distributed processing techniques across multiple computers to train the neural network 140 and/or execute other tasks.”;
paragraph [0085] reciting “The computing device 700 may be implemented in a number of different forms, as shown in the figure. For example, it may be implemented as a standard server 720, or multiple times in a group of such servers. …”)
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Zhou, Kumar, and Wang with Sak so that multiple servers can be used to train the DNN in Zhou in a distributed manner. This is a beneficial modification since it allows the processing to be shared among the servers and lessens the load of training the DNN on any particular single server.
24. Regarding Claim 34, Zhou further discloses The system of claim 33, wherein one of the at least two server devices is configured to generate a single one type of the second neural network model for image processing. (paragraph [0044] reciting “… A machine learning based mapping is then trained based on the training data (real and synthetic) to select a best segmentation algorithm or combination or segmentation algorithms based on image characteristics of the input images. For example, a deep neural network (DNN) can be trained to deep learning techniques, such as deep reinforcement learning, to select one or more segmentation algorithms for a given segmentation task based on characteristics of the medical image to be segmented. …” One segmentation algorithm corresponds to a single type of neural network model.)
25. Regarding Claim 35, Zhou further discloses The system of claim 33, wherein one of the at least two server devices is configured to generate more than one type of second neural network models for image processing. (paragraph [0044] reciting “… A machine learning based mapping is then trained based on the training data (real and synthetic) to select a best segmentation algorithm or combination or segmentation algorithms based on image characteristics of the input images. For example, a deep neural network (DNN) can be trained to deep learning techniques, such as deep reinforcement learning, to select one or more segmentation algorithms for a given segmentation task based on characteristics of the medical image to be segmented. …” The more than one segmentation algorithm corresponds to multiple types of neural network models.)
26. Regarding Claim 36, Zhou further discloses The system of claim 33, wherein two of the at least two server devices are configured to generate different types of second neural network models for image processing. (paragraph [0044] reciting “… A machine learning based mapping is then trained based on the training data (real and synthetic) to select a best segmentation algorithm or combination or segmentation algorithms based on image characteristics of the input images. For example, a deep neural network (DNN) can be trained to deep learning techniques, such as deep reinforcement learning, to select one or more segmentation algorithms for a given segmentation task based on characteristics of the medical image to be segmented. …” Multiple segmentation algorithms correspond to different types of neural network models for image processing.)
27. Regarding Claim 37, Zhou further discloses The system of claim 33, wherein a type of the second neural network model for image processing includes a type for image reconstruction, a type for image segmentation, a type for image denoising, a type for image enhancement, a type for image super-resolution processing, or a type for image artifact removing. (paragraph [0045] reciting “… Typically, medical image segmentation algorithms are designed and optimized with a specific context of use. For example, algorithms designed for segmenting tubular structures generally perform well in arteries and veins, while algorithms designed for “blob” like structures are well suited for organs such as the heart, brain, liver, etc. The master segmentation artificial agent 102 can automatically identify the context of use (e.g., the target anatomical structure to be segmented) and automatically switch between different segmentation algorithms for different target anatomical structures.”)
28. Regarding Claim 38, Zhou further discloses The system of claim 33, wherein the at least two service devices are configured to train the first neural network model for image processing to generate different types of the second neural network models for image processing (paragraph [0044] reciting “… A machine learning based mapping is then trained based on the training data (real and synthetic) to select a best segmentation algorithm or combination or segmentation algorithms based on image characteristics of the input images. For example, a deep neural network (DNN) can be trained to deep learning techniques, such as deep reinforcement learning, to select one or more segmentation algorithms for a given segmentation task based on characteristics of the medical image to be segmented. …” Multiple segmentation algorithms correspond to different types of neural network models for image processing.)
Wang further discloses and transmit the different types of the second neural network models for image processing to the one same client terminal. (col. 1, lines 15-27 reciting “Neural networks can be trained for different tasks, such as image processing. A trained neural network model can be transmitted to remote client devices for execution. Different client device types (e.g., different operating systems, screen sizes, processors) often require custom-made neural network models that are specifically designed for execution within a given client device environment. Managing multiple versions of a single neural network model, which then must be sent to different client devices over the network when requested, is difficult and often results in a waste of computational resources. Further, sending all client deices all versions is likewise not practical because client devices often have limited memory.”)
Allowable Subject Matter
29. Claims 25-32 and 39 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
30. The following is a statement of reasons for the indication of allowable subject matter: Claim 25 recites the limitation wherein the at least one processor is further configured to cause the system to: receive a first test result of the second neural network model from the at least one client terminal; and determine the second neural network model as a target neural network model for image processing in response to a determination that the first test result satisfies a first condition which is neither disclosed nor suggested by the cited references, either singly or in combination.
31. Claims 26-32 depend from claim 25.
32. Claim 39 recites the limitation wherein the at least two service devices are configured to train the first neural network model for image processing to generate different types of the second neural network models for image processing and transmit the different types of the second neural network models for image processing to different client terminals which is neither disclosed nor suggested by the cited references, either singly or in combination.
CONTACT
Any inquiry concerning this communication or earlier communications from the examiner should be directed to FRANK S CHEN whose telephone number is (571)270-7993. The examiner can normally be reached Mon - Fri 8-11:30 and 1:30-6.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kee Tung, can be reached at 571-272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/FRANK S CHEN/Primary Examiner, Art Unit 2611