DETAILED ACTION
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Application claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-20 of U.S. Patent No. 11,941,813.
Claims 1 and 3 are rejected under 35 U.S.C. 103 as being unpatentable over Zhou et al. (US 2022/0262105 A1), with Related U.S. Application Data: Provisional application No. 62/876,502, filed on Jul. 19, 2019, in view of Zhang et al. (CN 103942770 A), with SEARCH machine translation, and ZNAMENSKIY et al. (US 2019/0347524 A1):
Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Zhou et al. (US 2022/0262105 A1), with Related U.S. Application Data: Provisional application No. 62/876,502, filed on Jul. 19, 2019, in view of Zhang et al. (CN 103942770 A), with SEARCH machine translation, and ZNAMENSKIY et al. (US 2019/0347524 A1), as applied to claims 1 and 3, further in view of Kirby et al. (US 2020/0357516 A1):
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Zhou et al. (US 2022/0262105 A1), with Related U.S. Application Data: Provisional application No. 62/876,502, filed on Jul. 19, 2019, in view of Zhang et al. (CN 103942770 A), with SEARCH machine translation, and ZNAMENSKIY et al. (US 2019/0347524 A1), as applied to claims 1 and 3, further in view of HU et al. (CN 109191476 A), with SEARCH machine translation:
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Zhou et al. (US 2022/0262105 A1), with Related U.S. Application Data: Provisional application No. 62/876,502, filed on Jul. 19, 2019, in view of Zhang et al. (CN 103942770 A), with SEARCH machine translation, and ZNAMENSKIY et al. (US 2019/0347524 A1), as applied to claims 1 and 3, further in view of JAGANATHAN et al. (US 2023/0004749 A1), with Related U.S. Application Data: provisional application No. 62/821,766, filed on Mar. 21, 2019, further in view of PRASAD et al. (US 2013/0156305 A1):
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Zhou et al. (US 2022/0262105 A1), with Related U.S. Application Data: Provisional application No. 62/876,502, filed on Jul. 19, 2019, in view of Zhang et al. (CN 103942770 A), with SEARCH machine translation, and ZNAMENSKIY et al. (US 2019/0347524 A1), as applied to claims 1 and 3, further in view of JAGANATHAN et al. (US 2023/0004749 A1), with Related U.S. Application Data: provisional application No. 62/821,766, filed on Mar. 21, 2019, as applied to claim 5, further in view of Loskutoff et al. (US 4,791,068):
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Zhou et al. (US 2022/0262105 A1), with Related U.S. Application Data: Provisional application No. 62/876,502, filed on Jul. 19, 2019, in view of Zhang et al. (CN 103942770 A), with SEARCH machine translation, and ZNAMENSKIY et al. (US 2019/0347524 A1), as applied to claims 1 and 3, further in view of JAGANATHAN et al. (US 2023/0004749 A1), with Related U.S. Application Data: provisional application No. 62/821,766, filed on Mar. 21, 2019, as applied to claim 5, further in view of Zemenchik (US 2020/0107490 A1):
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Zhou et al. (US 2022/0262105 A1), with Related U.S. Application Data: Provisional application No. 62/876,502, filed on Jul. 19, 2019, in view of Zhang et al. (CN 103942770 A), with SEARCH machine translation, and ZNAMENSKIY et al. (US 2019/0347524 A1), as applied to claims 1 and 3, further in view of JAGANATHAN et al. (US 2023/0004749 A1), with Related U.S. Application Data: provisional application No. 62/821,766, filed on Mar. 21, 2019, as applied to claim 5, further in view of Yuan (US 2017/0365053 A1):
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Zhou et al. (US 2022/0262105 A1), with Related U.S. Application Data: Provisional application No. 62/876,502, filed on Jul. 19, 2019, in view of Zhang et al. (CN 103942770 A), with SEARCH machine translation, and ZNAMENSKIY et al. (US 2019/0347524 A1), as applied to claims 1 and 3, further in view of Funka-Lea et al. (US 2019/0261945 A1):
Claims 10, 11, 17, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Zhou et al. (US 2022/0262105 A1), with Related U.S. Application Data: Provisional application No. 62/876,502, filed on Jul. 19, 2019, in view of Zhang et al. (CN 103942770 A), with SEARCH machine translation, and ZNAMENSKIY et al. (US 2019/0347524 A1), as applied to claims 1 and 3, further in view of Funka-Lea et al. (US 2019/0261945 A1), as applied to claim 9, further in view of Giner et al. (US 2021/0279880 A1):
Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Zhou et al. (US 2022/0262105 A1), with Related U.S. Application Data: Provisional application No. 62/876,502, filed on Jul. 19, 2019, in view of Zhang et al. (CN 103942770 A), with SEARCH machine translation, and ZNAMENSKIY et al. (US 2019/0347524 A1), as applied to claims 1 and 3, further in view of Funka-Lea et al. (US 2019/0261945 A1), as applied to claim 9, further in view of Giner et al. (US 2021/0279880 A1), as applied to claims 10, 11, 17, and 19, further in view of Kerr et al. (US 2004/0148197 A1):
Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Zhou et al. (US 2022/0262105 A1), with Related U.S. Application Data: Provisional application No. 62/876,502, filed on Jul. 19, 2019, in view of Zhang et al. (CN 103942770 A), with SEARCH machine translation, and ZNAMENSKIY et al. (US 2019/0347524 A1), as applied to claims 1 and 3, further in view of Funka-Lea et al. (US 2019/0261945 A1), as applied to claim 9, further in view of Giner et al. (US 2021/0279880 A1), as applied to claims 10, 11, 17, and 19, further in view of Roth et al. (US 11,816,185 B1):
Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Zhou et al. (US 2022/0262105 A1), with Related U.S. Application Data: Provisional application No. 62/876,502, filed on Jul. 19, 2019, in view of Zhang et al. (CN 103942770 A), with SEARCH machine translation, and ZNAMENSKIY et al. (US 2019/0347524 A1), as applied to claims 1 and 3, further in view of Funka-Lea et al. (US 2019/0261945 A1), as applied to claim 9, further in view of Giner et al. (US 2021/0279880 A1), as applied to claims 10, 11, 17, and 19, further in view of Vaconcelos et al. (US 2021/0012769 A1):
Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Zhou et al. (US 2022/0262105 A1), with Related U.S. Application Data: Provisional application No. 62/876,502, filed on Jul. 19, 2019, in view of Zhang et al. (CN 103942770 A), with SEARCH machine translation, and ZNAMENSKIY et al. (US 2019/0347524 A1), as applied to claims 1 and 3, further in view of Funka-Lea et al. (US 2019/0261945 A1), as applied to claim 9, further in view of Giner et al. (US 2021/0279880 A1), as applied to claims 10, 11, 17, and 19, further in view of Zhang et al. (US 2019/0147250 A1), herein referred to as Zhang II:
Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Zhou et al. (US 2022/0262105 A1), with Related U.S. Application Data: Provisional application No. 62/876,502, filed on Jul. 19, 2019, in view of Zhang et al. (CN 103942770 A), with SEARCH machine translation, and ZNAMENSKIY et al. (US 2019/0347524 A1), as applied to claims 1 and 3, further in view of Funka-Lea et al. (US 2019/0261945 A1), as applied to claim 9, further in view of Giner et al. (US 2021/0279880 A1), as applied to claims 10, 11, 17, and 19, further in view of Moustafa et al. (US 2022/0126864 A1), with Related U.S. Application Data: Provisional application No. 62/826,955, filed on Mar. 29, 2019:
Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over Zhou et al. (US 2022/0262105 A1), with Related U.S. Application Data: Provisional application No. 62/876,502, filed on Jul. 19, 2019, in view of Zhang et al. (CN 103942770 A), with SEARCH machine translation, and ZNAMENSKIY et al. (US 2019/0347524 A1), as applied to claims 1 and 3, further in view of Funka-Lea et al. (US 2019/0261945 A1), as applied to claim 9, further in view of Giner et al. (US 2021/0279880 A1), as applied to claims 10, 11, 17, and 19, further in view of Zhang et al. (WO 2020/007277 A1), herein referred to as Zhang III, with SEARCH machine translation:
Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Zhou et al. (US 2022/0262105 A1), with Related U.S. Application Data: Provisional application No. 62/876,502, filed on Jul. 19, 2019, in view of Zhang et al. (CN 103942770 A), with SEARCH machine translation, and ZNAMENSKIY et al. (US 2019/0347524 A1), as applied to claims 1 and 3, further in view of Funka-Lea et al. (US 2019/0261945 A1), as applied to claim 9, further in view of Giner et al. (US 2021/0279880 A1), as applied to claims 10, 11, 17, and 19, further in view of FENG et al. (CN 109741343 A), with SEARCH machine translation:
Response to Amendment
The amendment was received 1/12/2026. Claims 1-20 are pending:
[Embedded image: media_image1.png]
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 0: Establish the broadest reasonable interpretation, as shown in the footnotes throughout.
Step 1
Claim 1 is directed to a process.
Step 2A, prong 1
The claims recite a mental process and math, boxed in below:
[Embedded image: media_image2.png]
Step 2A, prong 2
This judicial exception is not integrated into a practical application because the additional elements (not boxed in above), such as “masks,” “cropped training patches,” “labels,” “GPU,” “memory,” “sampling,” “encoder”1 and “decoder,” do not improve the technical field [0002] of “performing segmentation” in view of applicant’s disclosure [0070]: “better classification precision” (neural network23).
Step 2B:
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because each additional element, such as the training patches, the “artificial neural network,” “masks,” and “labels,” considered individually or in combination with the mental process and math, adheres to conventional practice as indicated in the background of applicant’s specification [0003][0004][0018][0019]:
[Embedded image: media_image3.png]
[Embedded image: media_image4.png]
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to final Office action, see 37 CFR 1.113(c). A request for reconsideration while not provided for in 37 CFR 1.113(c) may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Application claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-20 of U.S. Patent No. 11,941,813. Although the claims at issue are not identical, they are not patentably distinct from each other because the more specific claims of U.S. Patent No. 11,941,813 anticipate application claims 1-20. For example, application claim 1 is in Patent claims 1 and 13; application claim 2 is in Patent claim 14; and application claim 3 is in Patent claim 15. The remaining claims are similarly rejected:
[Embedded image: media_image5.png]
Response to Arguments
I. Rejection Under 35 USC 101
Step 2A: Prong One
Applicant's arguments filed 1/12/2026 have been fully considered but they are not persuasive:
In response to applicant's argument that the references fail to show certain features of the invention, it is noted that the features upon which applicant relies (i.e., pages 7-8: “track deep learning parameters4, execute5…implement6”) are not recited in the rejected claim(s). Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993).
Applicant states on page 8 that claim 1 does not set forth math. The examiner respectfully disagrees, since claim 1 recites math: “parameters”; see footnote.
Step 2A: Prong Two
Applicant's arguments filed 1/12/2026 have been fully considered but they are not persuasive:
In response to applicant's argument that the references fail to show certain features of the invention, it is noted that the features upon which applicant relies (i.e., page 9: “the processing”7) are not recited in the rejected claim(s). Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993).
Step 2B
Applicant's arguments filed 1/12/2026 have been fully considered but they are not persuasive:
Applicant states that there is something more and points to ([0072] [0095]) segmentation and classification; however, segmentation and classification are further abstract ideas, and “architecture”8 is not claimed:
[0072] The method 201 further includes training the segmentation model, at 211, using the cropped training patch, to adjust connections within the segmentation model. For example, the cropped training patch may have a corresponding known segmentation classification mask, and connections (e.g., parameters, values, weights, etc.) within the segmentation model may be adjusted by comparing a segmentation classification produced by processing the cropped training patch using the segmentation model, to the corresponding known segmentation classification mask.
[0095] For example, in some models cell segmentation may be first be performed with a watershed model after color deconvolution, using a UNet model architecture, etc., then each individual cell may be classified with a standard deep learning architecture such as ResNet, Inception, etc. by running the neural networks on each cell in a whole slide image containing millions of cells. In contrast, some example embodiments herein may combine cell segmentation and classification together (e.g., using UNet, etc.) with multiple cell type annotation labels. These approaches may greatly increase the speed (e.g., about 1000 times faster, etc.) compared to separately performing cell segmentation and then classifying each individual cell.
Thus, in response to applicant's argument that the references fail to show certain features of the invention, it is noted that the features upon which applicant relies (i.e., [0095]: “architecture”) are not recited in the rejected claim(s). Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993).
II. Double Patenting
Applicant states the claims (claim 1, line 13: “the training”) are not the same as US 11,941,813. The examiner respectfully disagrees, since “the processing”9 (US 11,941,813, claim 1, line 29) expresses10 (i.e., puts IP into words) “processing cropped training patches” (US 11,941,813, claim 1, line 9), such that “processing” is ultimately adjectivally and prepositionally modified as the claimed “training”:
“processing cropped training patches” “processing”: thus the double patenting rejection is maintained:
[Embedded image: media_image6.png]
III. Rejections Under 35 USC 103
Applicant’s arguments, see remarks, page 12, filed 1/12/2026, with respect to the rejection of claim 1 under 35 USC 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground of rejection is made under 35 USC 103:
Claims 1 and 3 are rejected under 35 U.S.C. 103 as being unpatentable over Zhou et al. (US 2022/0262105 A1), with Related U.S. Application Data: Provisional application No. 62/876,502, filed on Jul. 19, 2019, in view of Zhang et al. (CN 103942770 A), with SEARCH machine translation, and ZNAMENSKIY et al. (US 2019/0347524 A1), wherein ZNAMENSKIY teaches a Markush alternative (A) or (B) or (C) of the Markush element [(A)&(B)&(C)]: “multiple cell type annotation labels”:
ZNAMENSKIY teaches an annotation problem in the prior art (“Disadvantage” [0006]) and the last difference d) of claim 1:
d) cell (segmentation and classification) (“of an already segmented object” [0031]: segmented cell) … (multiple) (A) cell (&)(B) type (&) (C) annotation (labels):
(A) (multiple) cell (“type” “change label” [0137] last S) (labels) &
(B) (multiple) (“cell” [0137] last S) type (“change label”) (labels) &
(C) (multiple) annotation (“change label” [0137] last S) (labels)
under the broadest reasonable interpretation of claim 1.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1 and 3 are rejected under 35 U.S.C. 103 as being unpatentable over Zhou et al. (US 2022/0262105 A1), with Related U.S. Application Data: Provisional application No. 62/876,502, filed on Jul. 19, 2019, in view of Zhang et al. (CN 103942770 A), with SEARCH machine translation, and ZNAMENSKIY et al. (US 2019/0347524 A1):
[Embedded image: media_image7.png]
Re 1., Zhou teaches via the Provisional A computer-implemented method of training a segmentation model, the method comprising:11
receiving training (“input” [0023]) data for the (“image”, pg. 5, 1st txt blk) segmentation model, the received (input) training data having a (“different” [0033]) training data size, and the segmentation model including deep learning (“Transfer learning” [0003]) parameters (“associated with training the network” [0073], last S: fig. 2) adjusted (comprised by “updated” [0068] penult S) by comparing (resulting in a “segmentation error” [0068] 2nd to last S) segmentation (pixel) classifications (comprising “segmentation classes” [0068] 3rd S) produced by the segmentation model to corresponding specified (cropped-“ground truth” [0068] 6th S) segmentation classification masks (704,706 “in any other suitable manner” [0051] penult S: fig. 5:108: fig. 7:
[Embedded image: media_image8.png]
);
selecting (“randomly” [0033]) a patch size, where the (randomly) selected patch size is smaller (via “cropping” [0033]) than the (input) training data size;
selecting (“from a training dataset” [0023]) two random integers;
cropping a training patch (“by cropping” [0033]) from the (input) training data, the training patch (by cropping) having an origin according to the selected (“from a training dataset” [0023]) two random integers;
training the segmentation model (resulting in “the trained encoder-decoder network”, pg. 5, 1st txt blk) using the cropped training patch (by cropping) to adjust (weighted) connections (i.e., “updating12 weights” [0023]) within the (“neural network”13 [0057]) segmentation model, wherein the training includes a combination (“appended to an end of the decoder network” [0068] 3rd S) of cell segmentation and classification with multiple14 (via “region(s)” [0020] 2nd to last S) cell15 type16 (&) annotation17 labels (being as labeled “region(s) represented”18 [0020] 2nd to last S); and
repeating (“with different training samples”, pg. 20 [0064]) the selection (“from a training dataset” [0023]) of two random integers, cropping of a training patch (“by cropping” [0033]), and training of the segmentation model to train the segmentation model (resulting in “the trained encoder-decoder network”, pg. 5, 1st txt blk) with a (“sizes” [0033]) plurality of randomly selected cropped training patches (“by cropping” [0033]), where each randomly selected cropped training patch (“by cropping” [0033]) is cropped from the training (“input” [0023]) data using an origin based19 on different random integers.
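For illustration of the claimed random-crop training operation as mapped above, the following is a minimal sketch only; it is not drawn from any cited reference, and the array shapes, seed, and patch size are assumptions:

```python
import numpy as np

def crop_random_patch(training_data, patch_size, rng):
    """Crop a square patch whose origin is given by two random integers."""
    h, w = training_data.shape[:2]
    # Two random integers defining the patch origin (top-left corner).
    y = int(rng.integers(0, h - patch_size + 1))
    x = int(rng.integers(0, w - patch_size + 1))
    return training_data[y:y + patch_size, x:x + patch_size]

rng = np.random.default_rng(0)
image = np.arange(64 * 64).reshape(64, 64)   # stand-in for the training data
# Repeat the selection of two random integers and the cropping, so the model
# is trained with a plurality of randomly selected cropped training patches.
patches = [crop_random_patch(image, 16, rng) for _ in range(10)]
```

Each patch here would feed one training step of the segmentation model; the model-update step itself is omitted.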
Zhou does not teach the difference20 in claim 1 of:
a) two random integers…
b) an origin…
c) two random integers…
d) cell (segmentation and classification)21 … (multiple) (A) cell22 (&) (B) type23 (&) (C) annotation24 (labels)…
e) two random integers…
f) an origin based25 on different random integers.
Zhang teaches the difference [a) b) c) e) f)] in claim 1 of:
a) two random integers (via “randomly selecting” “integer” “coordinates” of 0 to 99, pg. 2 [0010]:
[Embedded image: media_image9.png]
…)
b) an origin (implicit given said “coordinates”: “top left corner…origin”, pg. 2 [0009]: (0,99))…
c) two random integers (via “second…randomly selecting” “integer value” “coordinates” “of 100”, pg. 5 [0057]:
[Embedded image: media_image10.png]
)…
e) two random integers (via “randomly selecting coordinates26 of 100 integer values”, pg. 9. claim 1)…
f) an origin (implicit given said “coordinates”: “top left corner…origin”, pg. 2 [0009]: (0,99)) based27 on different random integers (resulting in “cutting 100 square image block of 32 * 32”, pg. 5 [0057], after “randomly selecting coordinates of 100 integer”, pg. 5, [0057]).
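As an illustration of Zhang's cited operation as characterized above (randomly selecting coordinates of 100 integer values and cutting 100 square 32x32 image blocks, pg. 5 [0057]), a sketch follows; the image size and seed are assumptions, not taken from Zhang:

```python
import numpy as np

rng = np.random.default_rng(1)
image = np.zeros((256, 256))   # stand-in image; size is an assumption
block = 32

# Randomly select 100 pairs of integer coordinates, each serving as the
# top-left-corner origin of one cut block.
coords = [(int(rng.integers(0, 256 - block)),
           int(rng.integers(0, 256 - block))) for _ in range(100)]
blocks = [image[y:y + block, x:x + block] for (y, x) in coords]
```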
Since Zhou teaches a reconstructed image destruction via “reconstructed image” “loss”28, pg. 18 [0060], one of skill in the art of re-building can make Zhou’s be as Zhang’s, predictably recognizing the change to “effectively improve the reconstruction quality of”, Zhou [0001], the reconstructed image destruction:
[Embedded image: media_image11.png]
Zhou of the combination of Zhou, Zhang does not teach the last difference [d)] of claim 1:
d) cell (segmentation and classification)29 … (multiple)3031 (A) cell32 (&)3334 (B) type35 (&) (C) annotation36 (labels)37:
(A) (multiple) cell (labels) &
(B) (multiple) type (labels) &
(C) (multiple) annotation (labels)
under the broadest reasonable interpretation of claim 1.
ZNAMENSKIY teaches an annotation problem in the prior art (“Disadvantage” [0006]) and the last difference d) of claim 1:
d) cell (segmentation and classification)38 (“of an already segmented object” [0031]: segmented cell) … (multiple)3940 (A) cell41 (&)4243 (B) type44 (&) (C) annotation45 (labels)46:
(A) (multiple) cell (“type” “change label” [0137] last S) (labels) &
(B) (multiple) (“cell” [0137] last S) type (“change label”) (labels) &
(C) (multiple) annotation (“change label” [0137] last S) (labels)
under the broadest reasonable interpretation of claim 1.
Since Zhou of the combination of Zhou, Zhang teaches intensive annotation [0004] and labeling multiple training regions or patterns but not using the labeling/annotation, one of skill in the art of labeling can make Zhou’s of the combination of Zhou, Zhang be as ZNAMENSKIY’s, seeing in the change “the user correcting a label”, ZNAMENSKIY [0031] penult S, and “to use said corrected annotation as training feedback47 in the machine learning algorithm”, ZNAMENSKIY [0053] last S, so that subsequent or ongoing training operations of the deep learning machine can be altered48 (improved) or corrected and thus “superfluous… annotation… may be avoided”, ZNAMENSKIY [0023] last 2 Ss, thus addressing the annotation problem as recognized by Zhou and ZNAMENSKIY:
[Embedded image: media_image12.png]
Re 3., Zhou of the combination of Zhou, Zhang teaches The method of claim 1, wherein (“randomly” [0033]) selecting the patch size includes49
(A) setting the (“10 pixels x 10 pixels”, pg. 14 [0047]) patch size as half of the training data (“20 pixels x 20 pixels”, pg. 9 [0033]) size or
(B) setting the patch size according to available memory in a graphic processor unit (GPU) (given that Markush alternative (A) is taught the Markush element [(A) or (B)] is taught; hence, Markush alternative (B) is taught).
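The two claim 3 alternatives mapped above can be illustrated with the following sketch; it is not from any cited reference, and the GPU-memory model (bytes per pixel, square patches) is a hypothetical assumption:

```python
def select_patch_size(train_size, gpu_free_bytes=None, bytes_per_pixel=4):
    """Alternative (A): half the training data size.
    Alternative (B): capped by available GPU memory (hypothetical model)."""
    half = train_size // 2                      # alternative (A)
    if gpu_free_bytes is None:
        return half
    # alternative (B): largest square patch side fitting in free GPU memory
    max_side = int((gpu_free_bytes / bytes_per_pixel) ** 0.5)
    return min(half, max_side)
```

For example, with a 20-pixel training image, alternative (A) yields a 10-pixel patch, matching the 10 pixels x 10 pixels / 20 pixels x 20 pixels mapping above.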
Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Zhou et al. (US 2022/0262105 A1), with Related U.S. Application Data: Provisional application No. 62/876,502, filed on Jul. 19, 2019, in view of Zhang et al. (CN 103942770 A), with SEARCH machine translation, and ZNAMENSKIY et al. (US 2019/0347524 A1), as applied to claims 1 and 3, further in view of Kirby et al. (US 2020/0357516 A1):
[Embedded image: media_image13.png]
Re 2., Zhou of the combination of Zhou, Zhang, ZNAMENSKIY teaches The method of claim 1, wherein the training (resulting in “the trained encoder-decoder network”, pg. 5, 1st txt blk) is repeated (“with different training samples”, pg. 20 [0064]) until the plurality of randomly selected cropped training patches (“by cropping” [0033]) reaches (via a “convergence”, pg. 20 [0064]) a specified training (“sample”, pg. 9 [0033]) patch count threshold, where the specified training patch count threshold is indicative that the (weighted) connections within the (“image”, pg. 5, 1st txt blk) segmentation (neural network) model has been sufficiently adjusted (i.e., “updating50 weights” [0023], incorporating more sufficient information) to generate correct output classifications (“reducing false positives” pg. 7 [0025]) according to the (input) training data.
Zhou of the combination of Zhou, Zhang, ZNAMENSKIY does not teach the difference of claim 2: “a specified… count threshold, where the specified …count threshold is indicative that”.
Kirby teaches the difference of claim 2:
a specified … count threshold (or “class”51 “threshold” represented as fig. 4:412: “CONFIDENCE > THRESHOLD”), where the specified … (class-)count threshold is indicative that (“a count that is at least a threshold number (e.g., at least 100 image patches belonged to a particular class)” [0045] last S).
Since Zhou of the combination of Zhou, Zhang, ZNAMENSKIY teaches an image patch, one of skill in the art of image patches can make Zhou’s of the combination of Zhou, Zhang, ZNAMENSKIY be as Kirby’s, predictably recognizing the change “to excel in image recognition tasks, without requiring time-intensive selective feature extraction by humans”, Kirby [0010] last S.
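The claim 2 limitation as mapped, repeating the random-crop training until the number of cropped patches reaches a specified count threshold, can be sketched as follows; the `train_step` callback and the threshold value are hypothetical placeholders, not from any cited reference:

```python
def train_until_count(train_step, patch_count_threshold):
    """Repeat crop-and-train iterations until the specified training patch
    count threshold is reached (each iteration consumes one cropped patch)."""
    count = 0
    while count < patch_count_threshold:
        train_step()          # one crop-and-update iteration (placeholder)
        count += 1
    return count

# Usage with a stand-in training step that merely records each iteration.
steps = []
total = train_until_count(lambda: steps.append(1), 100)
```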
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Zhou et al. (US 2022/0262105 A1), with Related U.S. Application Data: Provisional application No. 62/876,502, filed on Jul. 19, 2019, in view of Zhang et al. (CN 103942770 A), with SEARCH machine translation, and ZNAMENSKIY et al. (US 2019/0347524 A1), as applied to claims 1 and 3, further in view of HU et al. (CN 109191476 A), with SEARCH machine translation:
[Embedded image: media_image14.png]
Re 4., Zhou of the combination of Zhou, Zhang, ZNAMENSKIY teaches The method of claim 1, wherein training the segmentation (neural network) model includes using (“any suitable number”, pg. 9 [0031]) (A) thirty or (B) less training images.
Zhou of the combination of Zhou, Zhang, ZNAMENSKIY does not teach the Markush element:
(A) thirty or (B) less training images.
HU teaches Markush alternative (A):
(“using”) Thirty training (“data”) images (i.e., “30 continuous slices” “training data set”, pg. 8, 8th txt blk, as shown in fig. 5, “to increase the image number of the training set”, pg. 8, 8th txt blk).
Since Zhou of the combination of Zhou, Zhang, ZNAMENSKIY suggests other “training images”, pg. 4 [0020], one of skill in the art of training images can make Zhou’s use of training images of the combination of Zhou, Zhang, ZNAMENSKIY be as HU’s, predictably recognizing the change of training a model using increased image-data quality via “to train…using data enhancement”52, HU, pg. 8, 8th txt blk; thus resulting in an increased-in-quality model.
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Zhou et al. (US 2022/0262105 A1), with Related U.S. Application Data: Provisional application No. 62/876,502, filed on Jul. 19, 2019, in view of Zhang et al. (CN 103942770 A), with SEARCH machine translation, and ZNAMENSKIY et al. (US 2019/0347524 A1), as applied to claims 1 and 3, further in view of JAGANATHAN et al. (US 2023/0004749 A1), with Related U.S. Application Data: provisional application No. 62/821,766, filed on Mar. 21, 2019, further in view of PRASAD et al. (US 2013/0156305 A1):
[Embedded image: media_image15.png]
Re 5., Zhou of the combination of Zhou, Zhang, ZNAMENSKIY teaches The method of claim 1, further comprising performing circumsolar53 image anomaly (image) segmentation by stacking an optical density space image on a normally captured image to define a multi-dimensional input tensor having multiple channels.
Zhou of the combination of Zhou,Zhang,ZNAMENSKIY does not teach the difference of claim 5:
--circumsolar54 image anomaly… by stacking an optical density space image on a normally captured image to define a multi-dimensional input tensor having multiple channels--.
Jaganathan teaches the difference of claim 5:
circumsolar55 image (via “satellite imaging”, pg. 78 [001031]) anomaly… by stacking an optical (“pixels”, pg. 12 [00261]) density space (via “satellite imaging”, pg. 78 [001031]) image (resulting in “stacked” “image patches”, pg. 50 [00730]) on a normally captured image (resulting in “stacked” “image patches”, pg. 50 [00730]) to define a multi-dimensional input tensor (“of any dimensionality”, pg. 54, 1st txt blk) having multiple channels (via fig. 111: flattened image representations:
[Image: media_image16.png])
Since Zhou of the combination of Zhou,Zhang, ZNAMENSKIY suggests a Convolutional Neural Network (CNN), “a fully convolutional network” [0061], one of skill in the art of CNNs can make Zhou’s of the combination of Zhou,Zhang, ZNAMENSKIY be as Jaganathan’s predictably recognizing the change “contributes strongly to accurate…classification”, Jaganathan, pg. 10 [00237].
The combination of Zhou, Zhang, ZNAMENSKIY, Jaganathan does not teach “anomaly”.
Prasad teaches “anomaly” (“with respect to a first background feature within which it is embedded in block S106”, [0078]: Fig. 7:S106).
Since Jaganathan of the combination of Zhou,Zhang, ZNAMENSKIY,Jaganathan teaches “methods… can be applied” to said “satellite imaging”, one of skill in the art of satellite imaging can make Jaganathan’s of the combination of Zhou,Zhang, ZNAMENSKIY,Jaganathan be as Prasad’s predictably recognizing the change providing a “simplification of image… segmentation…which it is easier to detect anomalies than in the whole image taken all at once”, Prasad [0089] last S.
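As a purely illustrative aside (not part of the claim mapping), the stacking recited in claim 5, an optical density space image stacked on a normally captured image to define a multi-dimensional input tensor having multiple channels, can be sketched as follows; the array shapes and the use of NumPy are assumptions for illustration only:

```python
import numpy as np

# Illustrative shapes only: a normally captured RGB image and a single-channel
# optical-density image of the same height and width (assumed, not from the record).
H, W = 64, 64
captured = np.random.rand(H, W, 3)         # "normally captured image", 3 channels
optical_density = np.random.rand(H, W, 1)  # "optical density space image", 1 channel

# Stacking along the channel axis yields a multi-dimensional input tensor
# with multiple channels, as recited in claim 5.
input_tensor = np.concatenate([captured, optical_density], axis=-1)
print(input_tensor.shape)  # (64, 64, 4)
```

The stacked tensor then serves as a single multi-channel input to a segmentation network.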
Claim(s) 6 is/are rejected under 35 U.S.C. 103 as being unpatentable over Zhou et al. (US 2022/0262105 A1) with Related U.S. Application Data Provisional application No. 62/876,502, filed on Jul. 19, 2019 in view of Zhang et al. (CN 103942770 A) with SEARCH machine translation and ZNAMENSKIY et al. (US 2019/0347524 A1) as applied in claims 1,3 further in view of JAGANATHAN et al. (US 2023/0004749 A1) with Related U.S. Application Data: provisional application No. 62/821,766, filed on Mar. 21, 2019 as applied in claim 5 further in view of Loskutoff et al. (US 4,791,068):
[Image: media_image17.png]
Re 6., Jaganathan of the combination of Zhou,Zhang, ZNAMENSKIY, Jaganathan teaches The method of claim 1, further comprising performing cell (via “cell biology”, pg. 78 [001031]) growth monitoring (image) segmentation by stacking different images (said resulting in “stacked” “image patches”, pg. 50 [00730]) having different focusing (“for an optical system…along the z axis”, pg. 87 [001106]) areas to define a multi-dimensional input tensor having multiple channels (mapped in claim 5).
Jaganathan of the combination of Zhou,Zhang, ZNAMENSKIY, Jaganathan does not teach the difference of claim 6 :“performing…growth monitoring…having different…areas”.
Loskutoff teaches:
performing…growth monitoring (so that “those areas containing single cells were monitored on consecutive days for cell growth”, c.19,ll. 40-45) …having different…areas (“of each of the cellular droplets”, c. 19, ll.35-40).
Since Jaganathan of the combination of Zhou, Zhang, ZNAMENSKIY, Jaganathan teaches said cell biology, one of skill in the art of cell biology and microscopes can make Jaganathan’s of the combination of Zhou, Zhang, ZNAMENSKIY, Jaganathan be as Loskutoff’s predictably recognizing the change “readily lends itself to screening large numbers of samples in a rapid and reproducible manner”, Loskutoff, c. 4, ll. 45-55.
Claim(s) 7 is/are rejected under 35 U.S.C. 103 as being unpatentable over Zhou et al. (US 2022/0262105 A1) with Related U.S. Application Data Provisional application No. 62/876,502, filed on Jul. 19, 2019 in view of Zhang et al. (CN 103942770 A) with SEARCH machine translation and ZNAMENSKIY et al. (US 2019/0347524 A1) as applied in claims 1,3 further in view of JAGANATHAN et al. (US 2023/0004749 A1) with Related U.S. Application Data: provisional application No. 62/821,766, filed on Mar. 21, 2019 as applied in claim 5 further in view of Zemenchik (US 2020/0107490 A1):
[Image: media_image18.png]
Re 7., Jaganathan of the combination of Zhou,Zhang, ZNAMENSKIY, Jaganathan teaches The method of claim 1, further comprising performing (“A and C overlap” “emission spectra”, pg. 86 [001095]) multi (overlapping)-spectral (“MRI”, Zhou, pg. 19, 1st txt blk) imaging segmentation (“preventing cross-talk between digital image sets from different cycles”, pg. 11 [00244]: as shown in fig. 111, above), where each (cross-talk) multi-spectral band (MRI) image (A or C) includes between ten and one hundred spectral (“wavelength”) bands (“(image/imaging channel)”, pg. 13 [00275]), wherein “one of the image channels is one of a plurality of filter wavelength bands”, pg. 104 [001236]), by stacking (said resulting in “stacked” “image patches”, pg. 50 [00730]) the multi-spectral band (MRI) images (A and C) to define a multi-dimensional input tensor having multiple channels (as mapped in claim 5).
Jaganathan of the combination of Zhou,Zhang, ZNAMENSKIY,Jaganathan does not teach the difference of claim 7:
“between ten and one hundred”.
Zemenchik teaches the difference:
between ten (“narrowly-spaced” [0014] 5th S) and one hundred (spectral bands).
Since Jaganathan of the combination of Zhou,Zhang,ZNAMENSKIY,Jaganathan teaches said methods can be applied to satellite imaging, one of skill in the art of satellite imaging can make Jaganathan’s of the combination of Zhou,Zhang, ZNAMENSKIY,Jaganathan be as Zemenchik’s predictably recognizing the change to “further enhance the accuracy of tillage operations, thereby increasing the yield potential of the subsequently harvested agricultural products”, Zemenchik [0039] last S.
Claim(s) 8 is/are rejected under 35 U.S.C. 103 as being unpatentable over Zhou et al. (US 2022/0262105 A1) with Related U.S. Application Data Provisional application No. 62/876,502, filed on Jul. 19, 2019 in view of Zhang et al. (CN 103942770 A) with SEARCH machine translation and ZNAMENSKIY et al. (US 2019/0347524 A1) as applied in claims 1,3 further in view of JAGANATHAN et al. (US 2023/0004749 A1) with Related U.S. Application Data: provisional application No. 62/821,766, filed on Mar. 21, 2019 as applied in claim 5 further in view of Yuan (US 2017/0365053 A1):
[Image: media_image19.png]
Re 8., Zhou of the combination of Zhou,Zhang, ZNAMENSKIY,Jaganathan teaches The method of claim 1, further comprising performing H & E whole-slide (“that holds the input DNA fragments during the sequencing process”, Jaganathan, pg. 14 [00284]) imaging (via “optical imaging device”, Jaganathan, pg. 88 [00112]) by tiling the (input) training data into smaller training patches (“by cropping” [0033]), wherein an output image includes
at least one of56
A) a lymphocyte cell (via “cell” “image” “data”, Jaganathan, pg. 78 [001031]),
B) epithelial cell (via “cell” “image” “data”, Jaganathan, pg. 78 [001031]) or
C) stromal cell (via “cell” “image” “data”, Jaganathan, pg. 78 [001031]), and
at least one of
D) connective tissue (“sample”, Jaganathan, pg. 83 [001068]),
E) lymphoid tissue (“sample”, Jaganathan, pg. 83 [001068]) or
F) smooth muscle tissue (“sample”, Jaganathan, pg. 83 [001068]).
Zhou of the combination of Zhou,Zhang,ZNAMENSKIY,Jaganathan does not teach the difference of claim 8:
“H & E whole…
lymphocyte…
epithelial…
stroma…and
connective…
lymphoid…
smooth muscle”.
Yuan teaches the difference of claim 8 of:
H & E whole (“-tumor section slides” [0070])…
A) lymphocyte (“encompassing fibroblasts and endothelial cells” [0070])…
B) epithelial (encompassed by said lymphocyte)…
C) stroma (“compartments” [0078] last S)…and
D) connective (comprised by said fibroblasts)…
E) lymphoid (comprising said encompassing fibroblasts57 & epithelial lymphocyte58)…
F) smooth muscle.
Since Jaganathan of the combination of Zhou,Zhang, ZNAMENSKIY,Jaganathan teaches a slide and tissue, one of skill in the art of slides and tissues can make Jaganathan’s of the combination of Zhou,Zhang, ZNAMENSKIY, Jaganathan be as Yuan’s predictably recognizing the change “has improved predictive power in cancer prognosis compared with previous indicators of immune infiltration”, Yuan [0058] 3rd S.
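As a purely illustrative aside, the tiling of training data into smaller training patches “by cropping” recited in claim 8 can be sketched as non-overlapping crops; the image shape and patch size below are assumptions for illustration:

```python
import numpy as np

def tile_image(image: np.ndarray, patch: int) -> list:
    """Tile a large image into non-overlapping square patches by cropping."""
    h, w = image.shape[:2]
    return [image[r:r + patch, c:c + patch]
            for r in range(0, h - patch + 1, patch)
            for c in range(0, w - patch + 1, patch)]

slide = np.zeros((512, 512, 3))   # stand-in for a whole-slide image (assumed size)
patches = tile_image(slide, 128)  # smaller training patches
print(len(patches))  # 16
```

Each patch can then be fed to training in place of the full-resolution slide.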
Claim(s) 9 is/are rejected under 35 U.S.C. 103 as being unpatentable over Zhou et al. (US 2022/0262105 A1) with Related U.S. Application Data Provisional application No. 62/876,502, filed on Jul. 19, 2019 in view of Zhang et al. (CN 103942770 A) with SEARCH machine translation and ZNAMENSKIY et al. (US 2019/0347524 A1) as applied in claims 1,3 further in view of Funka-Lea et al. (US 2019/0261945 A1):
[Image: media_image20.png]
Re 9., Zhou of the combination of Zhou,Zhang, ZNAMENSKIY teaches The method of claim 1, wherein the (image) segmentation (neural net) model includes a plurality of sequential encoder down-sampling blocks and a plurality of sequential decoder up-sampling (“e.g., 2D upsamplings, 3D upsamplings, etc.”, pg. 5, last txt blk) blocks.
Zhou of the combination of Zhou,Zhang, ZNAMENSKIY does not teach:
“sequential…down-sampling blocks…
sequential…blocks”.
Funka teaches the difference of claim 9:
(“consecutive” [0057] 2nd S) sequential…down-sampling (Unet-like) blocks (fig. 5: G3d: rectangles)…
(“consecutive” [0057] 2nd S) sequential…(UNet-like) blocks (fig. 5:G3d: rectangles).
Since Zhou of the combination of Zhou,Zhang, ZNAMENSKIY teaches a neural network, one of skill in the art of neural networks can make Zhou’s of the combination of Zhou,Zhang, ZNAMENSKIY be as Funka’s predictably recognizing the change “making the…resulting segmentation…more accurate”, Funka [0021] last two Ss:
[Image: media_image21.png]
Claim(s) 10,11,17,19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Zhou et al. (US 2022/0262105 A1) with Related U.S. Application Data Provisional application No. 62/876,502, filed on Jul. 19, 2019 in view of Zhang et al. (CN 103942770 A) with SEARCH machine translation and ZNAMENSKIY et al. (US 2019/0347524 A1) as applied in claims 1,3 further in view of Funka-Lea et al. (US 2019/0261945 A1) as applied in claim 9 further in view of Giner et al. (US 2021/0279880 A1):
[Image: media_image22.png]
Re 10., Zhou of the combination of Zhou,Zhang, ZNAMENSKIY,Funka teaches The method of claim 9, further comprising processing a multi-dimensional input tensor via the plurality of sequential encoder down-sampling (U-Net) blocks to generate an output tensor, wherein:
the multi-dimensional input tensor includes at least a first (“arbitrary”, [0021] last S-length/line) dimension59, a second (“second” is understood given “dimension”) dimension (“of dimensions”, [0021] last S, comprising a 2nd plane dimension or 2nd surface dimension) and a plurality of channels; and
the output tensor includes at least one segmentation (“class”) classification (“layer” [0068] 3rd S).
Zhou of the combination of Zhou,Zhang, ZNAMENSKIY,Funka does not teach the difference of claim 10:
“a multi-dimensional input tensor…
an output tensor…
the multi-dimensional input tensor includes… a plurality of channels; and the output tensor includes”.
Giner teaches the difference of claim 10:
a multi-dimensional input tensor (“X” [0061], 2nd S, i.e., “input volume 404” [0061]: figs. 3,4F:404: “Input Volume”)…
an output tensor (“to have 4 channels” [0064] 2nd S)…
the multi-dimensional input tensor (“X” [0061], 2nd S) includes… a plurality of (“C”) channels (“to having FM feature maps” [0059] 1st S via fig. 4F: 412: “Init Conv 8 kernels”); and
the output tensor (“to have 4 channels” [0064] 2nd S) includes (said 4 channels “with shape (W,H,D,4)” [0064] last S).
Since Funka of the combination of Zhou,Zhang, ZNAMENSKIY,Funka teaches a U-Net, one of skill in the art of U-Nets:
[Image: media_image23.png]
can make Funka’s of the combination of Zhou, Zhang, ZNAMENSKIY, Funka be as Giner’s predictably recognizing the change “to increase the accuracy of tumor segmentation”, Giner [0057] 1st S:
[Image: media_image24.png]
Re 11., Zhou of the combination of Zhou, Zhang, ZNAMENSKIY, Funka, Giner teaches The method of claim 10, wherein processing the multi-dimensional input tensor includes passing (as shown by the arrows in Giner’s figures 3, 4F) the multi-dimensional input tensor through the plurality of sequential encoder down-sampling blocks and the plurality of sequential decoder up-sampling blocks of the segmentation model to generate the output tensor.
Re 17.,Giner of the combination of Zhou,Zhang, ZNAMENSKIY ,Funka,Giner teaches The method of claim 10, wherein the multi-dimensional input tensor (“X” [0061], 2nd S) includes at least one of
gaming data for classifying player behavior and
medical data (or “Clinical data” [0081] 3rd to last S) for classifying X-Rays and/or MRIs (resulting in “patch” “labels” wherein “The patches can include a T1-Gd patch 710, T2-FLAIR patch 720, T1-weighted patch 730, T2-weighted patch 740”: [0088] 3rd & 4th Ss: fig. 7:700, “referred to as MRI modalities” [0026] 2nd S).
Re 19., Giner of the combination of Zhou,Zhang, ZNAMENSKIY ,Funka,Giner teaches The method of claim 10, wherein the output tensor (“to have 4 channels” [0064] 2nd S) includes at least two segmentation classifications (or “4 classes” [0064] penult S corresponding to a label-patch segmented image, fig. 8, a “patch 700” “segmented image” [0088] last S).
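As a purely illustrative aside, the flow recited in claims 10 and 11, a multi-dimensional input tensor passed through sequential encoder down-sampling blocks and sequential decoder up-sampling blocks to generate an output tensor with segmentation classifications, can be sketched as follows; the max-pooling, nearest-neighbour upsampling, and per-pixel projection are simplifying assumptions standing in for learned convolutional blocks:

```python
import numpy as np

def downsample(x: np.ndarray) -> np.ndarray:
    """2x2 max-pooling over the two spatial dimensions (channels-last layout)."""
    h, w, c = x.shape
    return x.reshape(h // 2, 2, w // 2, 2, c).max(axis=(1, 3))

def upsample(x: np.ndarray) -> np.ndarray:
    """2x nearest-neighbour upsampling over the two spatial dimensions."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

# Multi-dimensional input tensor: first dimension, second dimension, channels.
x = np.random.rand(32, 32, 4)

# Pass through sequential encoder down-sampling blocks...
for _ in range(2):
    x = downsample(x)
# ...then through sequential decoder up-sampling blocks.
for _ in range(2):
    x = upsample(x)

# A final per-pixel projection to segmentation classes yields the output tensor.
n_classes = 3
weights = np.random.rand(4, n_classes)
output_tensor = x @ weights
print(output_tensor.shape)  # (32, 32, 3)
```

The output tensor carries one channel per segmentation classification, matching the claimed input/output channel structure in spirit only.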
Claim(s) 12 is/are rejected under 35 U.S.C. 103 as being unpatentable over Zhou et al. (US 2022/0262105 A1) with Related U.S. Application Data Provisional application No. 62/876,502, filed on Jul. 19, 2019 in view of Zhang et al. (CN 103942770 A) with SEARCH machine translation and ZNAMENSKIY et al. (US 2019/0347524 A1) as applied in claims 1,3 further in view of Funka-Lea et al. (US 2019/0261945 A1) as applied in claim 9 further in view of Giner et al. (US 2021/0279880 A1) as applied in claims 10,11,17,19 further in view of Kerr et al. (US 2004/0148197 A1):
[Image: media_image25.png]
Re 12., Giner of the combination of Zhou,Zhang, ZNAMENSKIY,Funka,Giner teaches The method of claim 10, wherein:
the multi-dimensional input tensor (“X” [0061], 2nd S, i.e., “input volume 404” [0061]: figs. 3,4F:404: “Input Volume”) includes a video sequence (“that are applied to the same axial slice 200” [0027] 2nd S or “MR sequences” [0109] last S), and
processing the multi-dimensional input tensor (“X” [0061], 2nd S, i.e., “input volume 404” [0061]: figs. 3,4F:404: “Input Volume”) includes processing the video sequence (“that are applied to the same axial slice 200” [0027] 2nd S or “MR sequences” [0109] last S) for at least one of60
A) classifying (via a “labeling scheme” [0080] last S) behavior,
B) classifying (via a “labeling scheme” [0080] last S) vehicles,
C) person recognition (via an “identification process” [0020] 1st S), and
D) item recognition (via an “identification process” [0020] 1st S).
Giner of the combination of Zhou,Zhang, ZNAMENSKIY,Funka,Giner does not teach the difference of claim 12:
“video…
processing the video…
A) behavior…
B) vehicles…
C) person…
D) item”.
Kerr teaches the difference of claim 12:
video (via “the term content refers to any form of video” [0023], “ such as diagnostic images of the type that are generated by systems such as Computer Tomography, Ultra Sound, Magnetic Resonance Imaging” [0005] 2nd S)…
(“provides a content bearing signal to signal processor 32” [0026] 3rd S) processing the video (“and adapts the content for presentation” [0026] 5th S) …
A) behavior…
B) vehicles…
C) (“Profile information is assigned to each” [0034] 4th S)61 person…
D) item.
Since Giner teaches MR sequences, one of skill in the art of MR sequences can make Giner’s of the combination of Zhou,Zhang, ZNAMENSKIY,Funka,Giner be as Kerr’s predictably recognizing the change “to improve the perceived appearance of presented content. Such adjustments can be made based upon the type of content, and profile information. Similarly controller 34 can also be adapted to adjust and/or to control the operation of enhanced display apparatus 228 or light box 232 so that they do not present content to people who do not have appropriate viewing privileges or who are not authenticated.”, Kerr [0071].
Claim(s) 13 is/are rejected under 35 U.S.C. 103 as being unpatentable over Zhou et al. (US 2022/0262105 A1) with Related U.S. Application Data Provisional application No. 62/876,502, filed on Jul. 19, 2019 in view of Zhang et al. (CN 103942770 A) with SEARCH machine translation and ZNAMENSKIY et al. (US 2019/0347524 A1) as applied in claims 1,3 further in view of Funka-Lea et al. (US 2019/0261945 A1) as applied in claim 9 further in view of Giner et al. (US 2021/0279880 A1) as applied in claims 10,11,17,19 further in view of Roth et al. (US 11,816,185 B1):
[Image: media_image26.png]
Re 13., Giner of the combination of Zhou,Zhang, ZNAMENSKIY,Funka,Giner teaches The method of claim 10, wherein:
the multi-dimensional input tensor (“X” [0061], 2nd S, i.e., “input volume 404” [0061]: figs. 3,4F:404: “Input Volume”) includes62 (A) radar and/or (B) sonar data; and
processing the multi-dimensional input tensor (“X” [0061], 2nd S, i.e., “input volume 404” [0061]: figs. 3,4F:404: “Input Volume”) includes processing the (A) radar and/or (B) sonar data (via a “processor” [0067] 5th S) for object recognition (via an “identification process” [0020] 1st S).
Giner of the combination of Zhou,Zhang,ZNAMENSKIY,Funka,Giner does not teach the difference of claim 13:
(A) radar and/or (B) sonar…
The (A) radar and/or (B) sonar.
Roth teaches the difference of claim 13:
(A) radar (“captured”, c. 11, ll. 35-40) and/or (B) sonar…
the (A) radar (“captured”, “MRI” “volumetric data”, c. ll. 28-38) and/or (B) sonar (via c. 11, ll. 35-40:
[Image: media_image27.png])
Since Giner of the combination of Zhou, Zhang, ZNAMENSKIY, Funka, Giner teaches MRI and “other” “imaging” “devices” (Giner [0003] 4th S), one of skill in the art of imaging devices can make Giner’s of the combination of Zhou, Zhang, ZNAMENSKIY, Funka, Giner be as Roth’s predictably recognizing the change “to boost the robustness of the model on supervised volumetric segmentation tasks.”, Roth, c. 11, ll. 19-21.
Claim(s) 14 is/are rejected under 35 U.S.C. 103 as being unpatentable over Zhou et al. (US 2022/0262105 A1) with Related U.S. Application Data Provisional application No. 62/876,502, filed on Jul. 19, 2019 in view of Zhang et al. (CN 103942770 A) with SEARCH machine translation and ZNAMENSKIY et al. (US 2019/0347524 A1) as applied in claims 1,3 further in view of Funka-Lea et al. (US 2019/0261945 A1) as applied in claim 9 further in view of Giner et al. (US 2021/0279880 A1) as applied in claims 10,11,17,19 further in view of Vaconcelos et al. (US 2021/0012769 A1):
[Image: media_image28.png]
Re 14., Giner of the combination of Zhou,Zhang, ZNAMENSKIY,Funka,Giner teaches The method of claim 10, wherein the multi-dimensional input tensor (“X” [0061], 2nd S, i.e., “input volume 404” [0061]: figs. 3,4F:404: “Input Volume”) includes audio data received from different microphones.
Giner of the combination of Zhou,Zhang, ZNAMENSKIY ,Funka,Giner does not teach the difference of claim 14:
“audio…from different microphones”.
Vaconcelos teaches the difference of claim 14:
audio (“featuring an utterance and…for the generation of an audio feature tensor” [0082] penult S)…from different (“one or more”) microphones.
Since Giner teaches a user interface [0078], one of skill in the art of interfaces and tensors can make Giner’s of the combination of Zhou,Zhang, ZNAMENSKIY ,Funka,Giner be as Vaconcelos’ predictably recognizing the change “quickly becoming a viable option for providing a user interface”, Vaconcelos [0002] last S.
Claim(s) 15 is/are rejected under 35 U.S.C. 103 as being unpatentable over Zhou et al. (US 2022/0262105 A1) with Related U.S. Application Data Provisional application No. 62/876,502, filed on Jul. 19, 2019 in view of Zhang et al. (CN 103942770 A) with SEARCH machine translation and ZNAMENSKIY et al. (US 2019/0347524 A1) as applied in claims 1,3 further in view of Funka-Lea et al. (US 2019/0261945 A1) as applied in claim 9 further in view of Giner et al. (US 2021/0279880 A1) as applied in claims 10,11,17,19 further in view of Zhang et al. (US 2019/0147250 A1), herein referred to as Zhang II:
[Image: media_image29.png]
Re 15., Giner of the combination of Zhou, Zhang I, ZNAMENSKIY,Funka,Giner teaches The method of claim 10, wherein:
the multi-dimensional input tensor (“X” [0061], 2nd S, i.e., “input volume 404” [0061]: figs. 3,4F:404: “Input Volume”) includes vehicle control data (“for providing images of segmented tumor for at least one display 448” [0033] 2nd S); and
processing the multi-dimensional input tensor (“X” [0061], 2nd S, i.e., “input volume 404” [0061]: figs. 3,4F:404: “Input Volume”) includes processing the vehicle control data (“for providing images of segmented tumor for at least one display 448” [0033] 2nd S) for at least one of63
A) object recognition,
B) pattern recognition,
C) navigation and/or D) steering control (“to control the operation of, data processing apparatuses” [0074] 2nd S),
E) route planning, and
F) braking in emergency situations.
Giner of the combination of Zhou, Zhang, ZNAMENSKIY,Funka,Giner does not teach:
vehicle control…
vehicle control…
A) object (recognition)…
B) pattern (recognition)…
C) navigation and/or D) steering…
E) route planning…
F) braking in emergency situations.
Zhang II teaches the difference of claim 15:
vehicle control (“systems 740” [0066]: fig. 7:740: on right-side)…
vehicle control (“to operate the vehicle 710” [0071]: fig. 7:710: a car)…
A) (“lamp”) object (“identified” [0050] 6th S) (recognition)…
B) pattern (recognition)…
C) navigation and/or D) (“The vehicle controller can, for example, translate the motion plan into instructions for”) steering (“control” [0076] 4th S: fig. 7:740)…
E) route planning…
F) braking in emergency situations64.
Since Zhou of the combination (as illustrated in the rejection of claim 10) of Zhou, Zhang I, ZNAMENSKIY, Funka, Giner teaches that the “trained encoder-decoder network…can then be used…to perform any suitable task, such as image classification, image segmentation, etc.”, pg. 5, 1st txt blk, one of skill in the art of encoder-decoder networks can make Zhou’s of the combination (as illustrated in the rejection of claim 10) of Zhou, Zhang I, ZNAMENSKIY, Funka, Giner be as Zhang II’s predictably recognizing the change “to more accurately perform semantic segmentation of the portion(s) of an environment.”, Zhang II [0035] 2nd S:
[Image: media_image30.png]
Claim(s) 16 is/are rejected under 35 U.S.C. 103 as being unpatentable over Zhou et al. (US 2022/0262105 A1) with Related U.S. Application Data Provisional application No. 62/876,502, filed on Jul. 19, 2019 in view of Zhang et al. (CN 103942770 A) with SEARCH machine translation and ZNAMENSKIY et al. (US 2019/0347524 A1) as applied in claims 1,3 further in view of Funka-Lea et al. (US 2019/0261945 A1) as applied in claim 9 further in view of Giner et al. (US 2021/0279880 A1) as applied in claims 10,11,17,19 further in view of Moustafa et al. (US 2022/0126864 A1) with Related U.S. Application Data: Provisional application No. 62/826,955, filed on Mar. 29, 2019:
[Image: media_image31.png]
Re 16., Giner of the combination of Zhou, Zhang I, ZNAMENSKIY, Funka, Giner teaches The method of claim 10, wherein:
the multi-dimensional input tensor (“X” [0061], 2nd S, i.e., “input volume 404” [0061]: figs. 3,4F:404: “Input Volume”) includes behavior data; and
processing (via “downsampling operations”65 [0058] 2nd S) the multi-dimensional input tensor (“X” [0061], 2nd S, i.e., “input volume 404” [0061]: figs. 3,4F:404: “Input Volume”) includes processing (via “downsampling operations”66 [0058] 2nd S) the behavior data for at least one of67
A) aggressive behavior classification (via a “labeling68 scheme” [0080] last S) and
B) concealed items classification (via a “labeling69 scheme” [0080] last S).
Giner of the combination of Zhou, Zhang I, ZNAMENSKIY,Funka,Giner does not teach the difference of claim 16:
--behavior (data)…
A) aggressive behavior (classification)…
B) concealed items (classification)--.
Moustafa teaches via Provisional application No. 62/826,955 the difference of claim 16:
(“vehicle”) behavior (“data transfer”, pg. 41, 1st txt blk) (data)…
A) (“recognize”70) aggressive behavior (“such as aggressive honking, yelling, or unsafe situations such as screeching brakes”, pg. 84 [00170]:
[Image: media_image32.png]
) (classification)…
B) concealed items (classification)71.
Since Zhou of the combination (as illustrated in the rejection of claim 10) of Zhou, Zhang I, ZNAMENSKIY, Funka, Giner teaches that the “trained encoder-decoder network…can then be used…to perform any suitable task, such as image classification, image segmentation, etc.”, pg. 5, 1st txt blk, one of skill in the art of encoder-decoder networks can make Zhou’s of the combination (as illustrated in the rejection of claim 10) of Zhou, Zhang I, ZNAMENSKIY, Funka, Giner be as Moustafa’s predictably recognizing the change to “enhance the system’s intelligence to report unknown situations (time-based events that were not been seen by the system previously (either at training or test phases)”, Moustafa, pg. 162, 1st txt box:
[Image: media_image33.png]
Claim(s) 18 is/are rejected under 35 U.S.C. 103 as being unpatentable over Zhou et al. (US 2022/0262105 A1) with Related U.S. Application Data Provisional application No. 62/876,502, filed on Jul. 19, 2019 in view of Zhang et al. (CN 103942770 A) with SEARCH machine translation and ZNAMENSKIY et al. (US 2019/0347524 A1) as applied in claims 1,3 further in view of Funka-Lea et al. (US 2019/0261945 A1) as applied in claim 9 further in view of Giner et al. (US 2021/0279880 A1) as applied in claims 10,11,17,19 further in view of Zhang et al. (WO 2020/007277 A1), herein referred to as Zhang III, with SEARCH machine translation:
[Image: media_image34.png]
Re 18., Funka of the combination of Zhou,Zhang I, ZNAMENSKIY,Funka,Giner teaches The method of claim 10, where each encoder down-sampling block (Funka: fig. 5: G3d: rectangles) includes at least one of72
A) a Residual Network (ResNet) Basic block,
B) a ResNet Bottleneck block,
C) a simple two convolution block,
D) a Dense Convolutional Network (DenseNet) block, and
E) a ResNeXt block.
Funka of the combination of Zhou,Zhang I, ZNAMENSKIY,Funka,Giner does not teach the difference of claim 18 :
A) Residual Network (ResNet) Basic (block),
B) ResNet Bottleneck (block),
C) simple two convolution (block),
D) Dense Convolutional Network (DenseNet) (block), and
E) ResNeXt (block).
Zhang III teaches the difference of claim 18 of:
A) Residual Network (ResNet) Basic (block),
B) ResNet Bottleneck (block),
C) simple two convolution (block),
D) (“a network structure based on”) Dense Convolutional Network (DenseNet) (“and Unet (full convolutional neural network).”, pg. 4, 7th txt blk) (block), and
E) ResNeXt (block)73.
Since Zhou of the combination (as illustrated in the rejection of claim 10) of Zhou, Zhang I, ZNAMENSKIY, Funka, Giner teaches that the “trained encoder-decoder network…can then be used…to perform any suitable task, such as image classification, image segmentation, etc.”, pg. 5, 1st txt blk, one of skill in the art of encoder-decoder networks can make Zhou’s of the combination (as illustrated in the rejection of claim 10) of Zhou, Zhang I, ZNAMENSKIY, Funka, Giner be as Zhang III’s predictably recognizing the change “improves the transmission efficiency of information and gradients in the network…and…to improve the usampling information deficiency”, Zhang III, pg. 4, 7th txt blk.
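As a purely illustrative aside, the “Residual Network (ResNet) Basic block” of alternative A of claim 18 denotes a two-weight-layer block with an identity skip connection (out = relu(F(x) + x)); the sketch below models the 3x3 convolutions as per-pixel channel mixes, a simplifying assumption for brevity:

```python
import numpy as np

def relu(x: np.ndarray) -> np.ndarray:
    return np.maximum(x, 0.0)

def resnet_basic_block(x: np.ndarray, w1: np.ndarray, w2: np.ndarray) -> np.ndarray:
    """Two weight layers plus an identity shortcut: out = relu(F(x) + x).
    For brevity the 3x3 convolutions are modelled as per-pixel (1x1) channel mixes."""
    out = relu(x @ w1)    # first weight layer + nonlinearity
    out = out @ w2        # second weight layer
    return relu(out + x)  # residual (identity) shortcut, then nonlinearity

c = 8                               # channel count (assumed)
x = np.random.rand(16, 16, c)       # feature map entering the encoder block
w1 = np.random.rand(c, c) * 0.1
w2 = np.random.rand(c, c) * 0.1
y = resnet_basic_block(x, w1, w2)
print(y.shape)  # (16, 16, 8)
```

Stacking such blocks, with down-sampling between them, yields the encoder structure the claim recites.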
Claim(s) 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Zhou et al. (US 2022/0262105 A1) with Related U.S. Application Data Provisional application No. 62/876,502, filed on Jul. 19, 2019 in view of Zhang et al. (CN 103942770 A) with SEARCH machine translation and ZNAMENSKIY et al. (US 2019/0347524 A1) as applied in claims 1,3 further in view of Funka-Lea et al. (US 2019/0261945 A1) as applied in claim 9 further in view of Giner et al. (US 2021/0279880 A1) as applied in claims 10,11,17,19 further in view of FENG et al. (CN 109741343 A) with SEARCH machine translation:
[Image: media_image35.png]
Re 20., Giner of the combination of Zhou, Zhang I, ZNAMENSKIY, Funka, Giner teaches The method of claim 19, wherein the output tensor (“to have 4 channels” [0064] 2nd S) includes (said 4 channels “with shape (W,H,D,4)” [0064] last S) at least one of the segmentation (“class”) classifications (“layer”, Zhou: [0068] 3rd S) includes a segmentation74 (“brain mask”, Giner [0085] 2nd to last S and “patch” “mask” [0087] 3rd S) mask (fig. 6:630,640: brain & patch masks extracting “patches (masked)”) for an image (“concatenation”, Giner [0085] 2nd to last S).
Giner of the combination of Zhou,Zhang I, ZNAMENSKIY,Funka,Giner does not teach “segmentation”75.
Feng teaches:
(“coarse”) segmentation76 (“mask”, pg. 2, 8th txt blk).
Since Zhou, Funka,Giner of the combination of Zhou,Zhang I, ZNAMENSKIY, Funka,Giner teaches U-Net, one of skill in the art of U-Nets can make Zhou’s, Funka’s,Giner’s of the combination of Zhou,Zhang I, ZNAMENSKIY, Funka,Giner be as Feng’s predictably recognizing the change “to realize…automatic and accurate…improved image segmentation”, Feng, pg. 2, 3rd txt blk.
Conclusion
The prior art “nearest to the subject matter defined in the claims” (MPEP 707.05) made of record and not relied upon is considered pertinent to applicant's disclosure.
The following table lists several references that are relevant to the subject matter claimed and disclosed in this Application. The references are not relied on by the Examiner, but are provided to assist the Applicant in responding to this Office action.
Citation: Thomson et al. (US 2020/0090782 A1)
Relevance: Thomson teaches [0104]: “Inset highlights subpopulations clustered as macrophages, displaying tissue and cell type labels extracted from Tabula Muris annotations” as the closest to the claimed “multiple cell type annotation labels” of claim 1.

Citation: MINN (US 2021/0269886 A1)
Relevance: MINN teaches [0162]: “Dimensionality reduction was performed using tSNE as implemented in the Rtsne R package and resulting clusters were annotated using the provided cell type labels.” as the closest to the claimed “multiple cell type annotation labels” of claim 1.
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DENNIS ROSARIO whose telephone number is (571)272-7397. The examiner can normally be reached Monday-Friday, 9AM-5PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Henok Shiferaw can be reached at 571-272-4637. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DENNIS ROSARIO/ Examiner, Art Unit 2676
/Henok Shiferaw/ Supervisory Patent Examiner, Art Unit 2676
1 BROAD CLAIM LANGUAGE: encoder: one that encodes (Merriam-Webster.com)
2 note that the claimed "model" does not equal/mean "neural network" (an additional element). As applicants pointed out in applicant's remarks of 1/12/2026 regarding Example 39, Example 39 claims a face neural network (an additional element) and "model" is nonexistent in Example 39; rather, "neural network" (not claimed in claim 1) equals/means "model", wherein neural network is defined: Also called: neural net. an analogous network of electronic components, esp one in a computer designed to mimic the operation of the human brain, wherein mimic is defined: to imitate (a person, a manner, etc), esp for satirical effect; ape, wherein imitate is defined: to try to follow the manner, style, character, etc, of or take as a model (Dictionary.com): Thus, limitations (U-Net neural net) from applicant's disclosure are not read into the claimed "model".
3 note that limitations from applicant's disclosure are not read into the claimed "model".
4 a deep learning parameter is still a parameter at the end of the day, wherein parameter is defined: Mathematics. a (deep learning) constant or (deep learning) variable term in a (deep learning) function that determines the specific (deep learning) form of the (deep learning) function but not its general (deep learning) nature, as a in f(x) = ax, where a determines only the (deep learning) slope of the (deep learning) line described by f(x).
5 execute: Computers. to run (a program or routine) or carry out (an instruction in a program). (Dictionary.com)
6 implement: Computers. to realize or instantiate (an element in a program), often under certain conditions as specified by the software involved. (Dictionary.com)
7 processing: Computers. the act of carrying out operations on data or programs. (Dictionary.com)
8 architecture: the internal organization of a computer's components with particular reference to the way in which data is transmitted, wherein computer is defined: a programmable electronic device designed to accept data, perform prescribed mathematical and logical operations at high speed, and display the results of these operations. Mainframes, desktop and laptop computers, tablets, and smartphones are some of the different types of computers. (Dictionary.com)
9 BROAD CLAIM LANGUAGE: -ing (of “training” “processing”): a suffix of nouns formed from verbs, expressing the action of the verb (train or process) or its result, product, material, etc. (the art of building; a new building; cotton wadding ). (Dictionary.com)
10 express: to put (thought) into words; utter or state. (Dictionary.com)
11 colon: the sign (:) used to mark a major division in a sentence, to indicate that what follows is an elaboration, summation, implication, etc., of what precedes (“computer-implemented”); or to separate groups of numbers referring to different things, as hours from minutes in 5:30; or the members of a ratio or proportion, as in 1 : 2 = 3 : 6. (Dictionary.com): I don’t see this explicitly happening regarding the claimed “computer-implemented”.
12 update: to bring (a book, figures, or the like) up to date as by adding new information or making corrections, wherein correction is defined: a quantity applied or other adjustment made in order to increase accuracy, as in the use of an instrument or the solution of a problem. (Dictionary.com)
13 neural network: Also called neural net. Computers., a hardware or software system in which weighted connections between data nodes are refined to produce increasingly accurate results in information processing, as in pattern recognition or problem solving, with the goal of algorithmic computing that requires minimal human intervention. (Dictionary.com)
14 BROAD CLAIM LANGUAGE: multiple: consisting of, having, or involving several or many individuals, parts, elements, relations, etc. (Dictionary.com)
15 coordinate adjective (see Norquist (Coordinate Adjectives: Definition and Examples))
16 coordinate adjective (see Norquist (Coordinate Adjectives: Definition and Examples))
17 coordinate adjective (see Norquist (Coordinate Adjectives: Definition and Examples))
18 represent: to present in words; set forth; describe; state, wherein describe is defined: to pronounce, as by a designating term, phrase, or the like; label. (Dictionary.com)
19 Claim scope: “based” (on different random integers) is a past participle participating with/contributing to the action of “randomly selected” and/or “cropped” and/or “using an origin”
20 THE CLAIMED INVENTION AS A WHOLE regarding “cell type annotation”:
The blurry problem is via applicant’s disclosure:
[0088] An example image 432 is illustrated in Fig. 4, where the cells at the center 434 of the image 432 are sharp and focused, while the cells at the edge 436 of the image 432 are blurry and out of focus. In other images, the focus area may be on the edge cells while the center cells are blurry, etc. Each of the images having different focusing areas may be stacked into a tensor (N, W, H,), where N denotes a different focus area for the image. For example, in the method 201, receiving the training image may include receiving circumsolar anomaly training images, at 219.
The solution is:
[0088] An example image 432 is illustrated in Fig. 4, where the cells at the center 434 of the image 432 are sharp and focused, while the cells at the edge 436 of the image 432 are blurry and out of focus. In other images, the focus area may be on the edge cells while the center cells are blurry, etc. Each of the images having different focusing areas may be stacked into a tensor (N, W, H,), where N denotes a different focus area for the image. For example, in the method 201, receiving the training image may include receiving circumsolar anomaly training images, at 219.
I don’t see in claim 1 “Each of the images having different focusing areas may be stacked into a tensor (N, W, H,)”. This absence of applicant’s solution is an indication of obviousness. The “cell type annotation” at [0095] does not make a clear appearance in the focusing solution.
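For illustration only (not relied upon in the rejection), the stacking operation described in applicant's [0088] can be sketched as follows; the array names, image sizes, and number of focus areas are hypothetical placeholders, and the sketch only assumes that each focus area yields one 2-D image stacked along a new leading axis into a tensor (N, W, H):

```python
import numpy as np

# Hypothetical sketch of the stacking described in [0088]: several images
# of the same field, each with a different focus area, are stacked along
# a new leading axis into a tensor of shape (N, W, H), where N indexes
# the focus area.
W, H = 64, 48   # assumed image width and height
N = 5           # assumed number of focus areas

# One 2-D image per focus area (random placeholders for real image data)
images = [np.random.rand(W, H) for _ in range(N)]

# Stack into a single (N, W, H) tensor
focus_stack = np.stack(images, axis=0)

print(focus_stack.shape)  # (5, 64, 48)
```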
21 italics represent claim limitations already taught
22 coordinate adjective: implies an implicit [Markush element] of (Markush alternatives).
23 “type” is coordinate adjective: implies an implicit [Markush element] of (Markush alternatives).
24 coordinate adjective: implies an implicit [Markush element] of (Markush alternatives).
25 Claim scope: “based” (on different random integers) is a past participle participating with/contributing to the action of “randomly selected” and/or “cropped” and/or “using an origin”
26 coordinate: maths any of a set of numbers that defines the location of a point in space, wherein number is defined: a concept of quantity that is or can be derived from a single unit, the sum of a collection of units, or zero. Every number occupies a unique position in a sequence, enabling it to be used in counting. It can be assigned to one or more sets that can be arranged in a hierarchical classification: every number is a complex number; a complex number is either an imaginary number or a real number, and the latter can be a rational number or an irrational number; a rational number is either an integer (100) or a fraction, while an irrational number can be a transcendental number or an algebraic number. See: complex number, imaginary number, real number, rational number, irrational number, integer, fraction, transcendental number, algebraic number. See also: cardinal number, ordinal number. (Dictionary.com)
27 Claim scope: “based” (on different random integers) is a past participle participating with/contributing to the claimed action of “randomly selected” and/or “cropped” and/or “using an origin”
28 loss: destruction or ruin (Dictionary.com)
29 italics represent claim limitations already taught
30 BROAD CLAIM LANGUAGE: multiple: consisting of, having, or involving several or many individuals, parts, elements, relations, etc. (Dictionary.com)
31 cumulative adjective: “multiple” modifies cell or type or annotation each of which (e.g., “multiple annotation”) modifies “labels”: multiple annotation labels
32 “cell” is coordinate adjective: implies an implicit Markush element of Markush alternatives: A & B & C, wherein coordinate is defined: Grammar. of the same rank in grammatical construction, as Jack and Jill in the phrase Jack and Jill [ Jack and Jill went up the hill, To fetch a pail of water; Jack fell down, and broke his crown, And Jill came tumbling after.] , or got up and shook hands in the sentence He got up and shook hands, where rank is defined: [same] relative position or standing [up the hill or to “labels” relative to other coordinate adjectives] (Dictionary.com)
33 and: (used to connect grammatically coordinate words, phrases, or clauses) along or together with; as well as; in addition to; besides; also; moreover. (Dictionary.com)
34 and: (used to connect [Markush] alternatives). (Dictionary.com)
35 “type” is coordinate adjective: implies an implicit Markush element: [(A) & (B) & (C)]
36 “annotation” is a coordinate adjective: implies an implicit Markush element: A & B & C
37 Regarding “multiple cell type annotation labels” via applicant’s disclosure:
--[0146] The foregoing description of the embodiments has been provided for purposes of illustration and description. It is not intended to be exhaustive or to limit the disclosure. Individual elements, intended or stated uses, or features of a particular embodiment are generally not limited to that particular embodiment, but, where applicable, are interchangeable and can be used in a selected embodiment, even if not specifically shown or described. The same may also be varied in many ways. Such variations are not to be regarded as a departure from the disclosure, and all such modifications are intended to be included within the scope of the disclosure.—wherein scope is defined: Linguistics, Logic. the range of words (one of which is “labels”) or elements (one of which is “labels”) of an expression (claim 1) over which (i.e., “labels”) a modifier (a patent examiner) or operator (or me) has control (of the Markush alternatives A=”cell”,B=“type”,C=”annotation” each modifying “labels”). (Dictionary.com)
38 italics represent claim limitations already taught
39 BROAD CLAIM LANGUAGE: multiple: consisting of, having, or involving several or many individuals, parts, elements, relations, etc. (Dictionary.com)
40 cumulative adjective: “multiple” modifies cell or type or annotation each of which (e.g., “multiple annotation”) modifies “labels”: multiple annotation labels
41 “cell” is coordinate adjective: implies an implicit Markush element of Markush alternatives: A & B & C, wherein coordinate is defined: Grammar. of the same rank in grammatical construction, as Jack and Jill in the phrase Jack and Jill [ Jack and Jill went up the hill, To fetch a pail of water; Jack fell down, and broke his crown, And Jill came tumbling after.] , or got up and shook hands in the sentence He got up and shook hands, where rank is defined: [same] relative position or standing [up the hill or to “labels” relative to other coordinate adjectives] (Dictionary.com)
42 and: (used to connect grammatically coordinate words, phrases, or clauses) along or together with; as well as; in addition to; besides; also; moreover. (Dictionary.com)
43 and: (used to connect [Markush] alternatives). (Dictionary.com)
44 “type” is coordinate adjective: implies an implicit Markush element: [(A) & (B) & (C)]
45 “annotation” is a coordinate adjective: implies an implicit Markush element: A & B & C
46 Regarding “multiple cell type annotation labels” via applicant’s disclosure:
--[0146] The foregoing description of the embodiments has been provided for purposes of illustration and description. It is not intended to be exhaustive or to limit the disclosure. Individual elements, intended or stated uses, or features of a particular embodiment are generally not limited to that particular embodiment, but, where applicable, are interchangeable and can be used in a selected embodiment, even if not specifically shown or described. The same may also be varied in many ways. Such variations are not to be regarded as a departure from the disclosure, and all such modifications are intended to be included within the scope of the disclosure.—wherein scope is defined: Linguistics, Logic. the range of words (one of which is “labels”) or elements (one of which is “labels”) of an expression (claim 1) over which (i.e., “labels”) a modifier (a patent examiner) or operator (or me) has control (of the Markush alternatives A,B,C). (Dictionary.com)
47 feedback: the furnishing of data concerning the operation or output of a machine to an automatic control device or to the machine itself, so that subsequent or ongoing operations of the machine can be altered or corrected. (Dictionary.com)
48 alter: to make different in some particular, as size, style, course, or the like; modify, wherein modify is defined: to change somewhat the form or qualities of; alter partially; amend, wherein amend is defined: to change for the better; improve. (Dictionary.com)
49 Markush element follows: [(A) or (B)]
50 update: Computers., to incorporate new or more accurate information in (a database, program, procedure, etc.), wherein accurate is defined: free from error or defect; consistent with a standard, rule, or model; precise; exact, wherein exact is defined: strictly accurate or correct, wherein correct is defined: to set or make true, accurate, or right; remove the errors or faults from, wherein right is defined: in accordance with what is good, proper, or just, wherein good is defined: sufficient or ample (Dictionary.com)
51 class: a number of persons or things regarded as forming a group by reason of common attributes, characteristics, qualities, or traits; kind; sort, wherein number is defined: the sum, total, count, or aggregate of a collection of people or things. (Dictionary.com)
52 enhancement: the state or quality of being elevated, heightened, or increased, as in quality, degree, intensity, or value. (Dictionary.com)
53 circumsolar-- directed, traveling, etc., around the sun (Dictionary.com)
54 circumsolar-- directed, traveling, etc., around the sun (Dictionary.com)
55 circumsolar-- directed, traveling, etc., around the sun (Dictionary.com)
56 Markush elements follow: [A,B OR C] AND [D,E OR F]
57 fibroblast: a cell that contributes to the formation of connective tissue fibers. (Dictionary.com)
58 lymphocyte: a type of white blood cell formed in lymphoid tissue (Dictionary.com)
59 dimension: Any one of the three physical or spatial properties of length, area, and volume. In geometry, a point is said to have zero dimension; a figure having only length, such as a line, has one dimension; a plane or surface, two dimensions; and a figure having volume, three dimensions. The fourth dimension is often said to be time, as in the theory of General Relativity. Higher dimensions can be dealt with mathematically but cannot be represented visually. (Dictionary.com)
60 Markush element [A,B,C or D] follows
61 Since Kerr teaches Markush alternative C), the Markush element [A,B,C or D] is taught.
62 Markush element follows: [A and/or B]
63 Markush element follows: [(A), (B), (C or D), (E) and (F)]
64 Since Zhang II teaches Markush alternatives A and D, the Markush element [(A), (B), (C and/or D), (E), (F)] is taught. Hence Markush alternatives B, C, E, F are also taught under the broadest reasonable interpretation of claim 15.
65 operations Computers., any discrete activity or action that is performed by a computer, as reading, writing, processing, sending, or receiving data. (Dictionary.com)
66 operations Computers., any discrete activity or action that is performed by a computer, as reading, writing, processing, sending, or receiving data. (Dictionary.com)
67 Markush element follows: A AND B
68 label: to put in a certain class; classify, wherein classify is defined: to assign a classification to (information, a document, etc.). (Dictionary.com)
69 label: to put in a certain class; classify, wherein classify is defined: to assign a classification to (information, a document, etc.). (Dictionary.com)
70 recognize: to perceive (a person, creature, or thing) to be the same as or belong to the same class as something previously seen or known; know again (Dictionary.com)
71 Since Moustafa teaches Markush alternative A, Moustafa teaches the Markush element: [A and B] and thus Moustafa teaches Markush alternative B.
72 Markush element follows: A,B,C,D and E
73 Since Zhang III teaches Markush alternative D, Zhang III teaches the Markush element [A,B,C,D,E] and hence teaches Markush alternatives A,B,C,E under the broadest reasonable interpretation of claim 18.
74 “segmentation” further limiting “mask”
75 “segmentation” further limiting “mask”
76 “segmentation” further limiting “mask”