DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Status
Claims 1-15 were pending for examination in Application No. 18/031,729, filed April 13th, 2023. In the remarks and amendments received on October 31st, 2025, claims 1-2, 4-5, 10, 12, and 14-15 were amended and claims 6-7 were canceled. Accordingly, claims 1-5 and 8-15 are currently pending for examination in the application.
Response to Amendment
Applicant’s amendments to the Claims filed October 31st, 2025, have overcome each and every objection, 35 U.S.C. § 112(b) rejection, and 35 U.S.C. § 101 rejection regarding non-statutory subject matter previously set forth in the Non-Final Office Action mailed August 6th, 2025. Accordingly, the objection(s), 35 U.S.C. § 112(b) rejection(s), and 35 U.S.C. § 101 rejection(s) regarding non-statutory subject matter are withdrawn. The examiner thanks Applicant for considering the suggested amendments to the disclosure.
Response to Arguments
Applicant’s arguments filed October 31st, 2025, regarding the rejection(s) of the independent claim(s) have been fully considered but are moot because they do not apply to the new combination of references used in the current rejection below. The arguments that have not been rendered moot by the new combination of references, in light of Applicant’s newly submitted amendments, are addressed below.
35 U.S.C. § 112(f) Interpretation(s)
The examiner appreciates Applicant’s remarks traversing the interpretation under 35 U.S.C. § 112(f) of the claim term(s) not reciting “means” (or “step for”), as previously set forth in the Non-Final Office Action mailed August 6th, 2025. However, the examiner respectfully disagrees with Applicant’s position (pg. 6 of Applicant’s Remarks) that the claim term(s) not reciting “means” (or “step for”) do not invoke 35 U.S.C. § 112(f).
35 U.S.C. § 112(f) is invoked for the claim term “device for” in claim 13, as listed in the “Claim Interpretation” section of the current Office Action below, because the term recites the non-structural generic placeholder “device for” (MPEP § 2181). This generic placeholder precedes the functional limitation “observing said target particle in the sample”.
Allowable Subject Matter
Claims 2-5 and 11 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Priority (Previously Presented)
Acknowledgment is made of this application’s status as a U.S. National Stage filing under 35 U.S.C. § 371 of International Application No. PCT/FR2021/051818, filed on October 19th, 2021, which claims priority to French (FR) Patent Application No. 2010740, filed on October 20th, 2020.
Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. § 119(a)-(d). The certified copy has been filed as to French (FR) Patent Application No. 2010740, filed on October 20th, 2020.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on December 3rd, 2025, is in compliance with the provisions of 37 CFR 1.97. Accordingly, the IDS has been considered by the examiner and is attached.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that use the word “means” and are, thus, being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. This application further includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier, as explained in MPEP § 2181, subsection I (note that the list of generic placeholders below is not exhaustive, and other generic placeholders may invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph):
A. The Claim Limitation Uses the Term "Means" or "Step" or a Generic Placeholder (A Term That Is Simply A Substitute for "Means")
With respect to the first prong of this analysis, a claim element that does not include the term "means" or "step" triggers a rebuttable presumption that 35 U.S.C. 112(f) does not apply. When the claim limitation does not use the term "means," examiners should determine whether the presumption that 35 U.S.C. 112(f) does not apply is overcome. The presumption may be overcome if the claim limitation uses a generic placeholder (a term that is simply a substitute for the term "means"). The following is a list of non-structural generic placeholders that may invoke 35 U.S.C. 112(f): "mechanism for," "module for," "device for," "unit for," "component for," "element for," "member for," "apparatus for," "machine for," or "system for." Welker Bearing Co., v. PHD, Inc., 550 F.3d 1090, 1096, 89 USPQ2d 1289, 1293-94 (Fed. Cir. 2008); Mass. Inst. of Tech. v. Abacus Software, 462 F.3d 1344, 1354, 80 USPQ2d 1225, 1228 (Fed. Cir. 2006); Personalized Media, 161 F.3d at 704, 48 USPQ2d at 1886–87; Mas-Hamilton Group v. LaGard, Inc., 156 F.3d 1206, 1214-1215, 48 USPQ2d 1010, 1017 (Fed. Cir. 1998). Note that there is no fixed list of generic placeholders that always result in 35 U.S.C. 112(f) interpretation, and likewise there is no fixed list of words that always avoid 35 U.S.C. 112(f) interpretation. Every case will turn on its own unique set of facts.
Such claim limitation(s) is/are:
"data-processing means are configured to implement" in claim 12, implemented on hardware disclosed in para. [0049] (e.g., "the client 2 is a mass-market piece of equipment, in particular a desktop computer, a laptop computer, etc."); and
"device for observing…" in claim 13, implemented on hardware disclosed in para. [0057] (e.g., "Preferably, the device comprises an optical system 23 consisting, for example, of a microscope objective and of a tube lens…").
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Additional Claim Interpretations:
Regarding claim(s) 2, the claim recites the phrase "in a uniform manner". This phrase is a term of degree. The claim provides a standard for ascertaining this term of degree as being “centered on and aligned in a predetermined direction”. Therefore, for examination purposes, the term of degree “in a uniform manner”, which modifies the claim limitation of representing a "target particle", will be interpreted as a "target particle… centered on and aligned in a predetermined direction" as required by the claim.
Regarding claim(s) 3, the phrase “so as to represent said particle in said uniform manner” is merely an intended use/result limitation and not a functional or structural requirement of the claim. Therefore, this phrase will be interpreted as reciting an intended use/result of the functional requirement of the claim, “extracting said input image from an overall image of the sample”.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1 and 12-15 are rejected under 35 U.S.C. 103 as being unpatentable over Wiliem et al. (Wiliem; “Automatic Classification of Human Epithelial Type 2 Cell Indirect Immunofluorescence Images using Cell Pyramid Matching,” 2014) in view of Aharon et al. (Aharon; “K-SVD: An Algorithm for Designing Overcomplete Dictionaries for Sparse Representation,” 2006).
Regarding claim 1, Wiliem discloses a method for classifying at least one input image representing a target particle in a sample (1st para. of pg. 3 and 1st para. of section 2 on pg. 4, recite(s)
[1st para. of pg. 3] “…Specifically, each image is processed as a pyramid of levels, with each level containing non-overlapping regions. The levels differ from each other through an increasing number of regions. Each region is divided into small image patches, and an average histogram of visual words is computed for each region. The histograms from all regions are then fed into a Support Vector Machine (SVM) classifier [32] that uses a specialised kernel.”
[2. Hep-2 Cell Classification System] “Each positive HEp-2 cell image is represented as a three-tuple (I, M, δ) which consists of: (i) the Fluorescein Isothiocyanate (FITC) image channel I; (ii) a binary cell mask image M which can be manually defined, or extracted from the (DAPI) image channel [15]; and (iii) the fluorescence intensity δ ∈ {strong, weak} which specifies whether the cell is a strong positive or weak positive. Strong positive images normally have more defined details, while weak positive images are duller.”
, where the “Hep-2 cell image” is an input image representing a target particle (e.g., “Hep-2 cell”) in a sample), the method being characterized in that it comprises implementation, by data-processing means of a client, (abstract, recite(s)
[abstract] “This paper describes a novel system for automatic classification of images obtained from Anti-Nuclear Antibody (ANA) pathology tests on Human Epithelial type 2 (HEp-2) cells using the Indirect Immunofluorescence (IIF) protocol. The IIF protocol on HEp-2 cells has been the hallmark method to identify the presence of ANAs, due to its high sensitivity and the large range of antigens that can be detected. However, it suffers from numerous shortcomings, such as being subjective as well as time and labour intensive. Computer Aided Diagnostic (CAD) systems have been developed to address these problems, which automatically classify a HEp-2 cell image into one of its known patterns (eg. speckled, homogeneous). Most of the existing CAD systems use handpicked features to represent a HEp-2 cell image, which may only work in limited scenarios. We propose a novel automatic cell image classification method termed Cell Pyramid Matching (CPM), which is comprised of regional histograms of visual words coupled with the Multiple Kernel Learning framework. We present a study of several variations of generating histograms and show the efficacy of the system on two publicly available datasets: the ICPR HEp-2 cell classification contest dataset and the SNPHEp-2 dataset.”
, where the proposed “automatic cell image classification method”, being a computer aided diagnostic (CAD) system, is the method implemented by data-processing means of at least a client computer) of steps of:
(b) extracting a feature vector of features of said target particle, said features being numerical coefficients each associated with one elementary image of a set of elementary images each representing a reference particle such that a linear combination of said elementary images weighted by said coefficients approximates the representation of said target particle in the input image (last para. of pg. 5 under section 3.1, section 3.2 on pg. 6, section 3.2.3 on pg. 6, 3rd para. of section 4.1 on pg. 9, and Fig. 2 on pg. 2, recite(s)
[last para. of pg. 5 under section 3.1] “… The dictionary of visual words, denoted as D, is trained from patches extracted in sliding window manner from training cell images. Each histogram encoding method has specific dictionary training procedure.”
[3.2. Generation of Local Histograms] “For each patch-level feature that belongs to region r, x_j ∈ X_r, a local histogram h_j is obtained. In this work we consider three prominent histogram encoding methods: (1) vector quantisation; (2) soft assignment; (3) sparse coding. The methods are elucidated below.”
[media_image1.png: figure reproduced from Wiliem (greyscale)]
[4.1. Datasets: ICPRContest and SNP HEp-2] “There are 1,884 cell images extracted from 40 specimen images. The specimen images are divided into training and testing sets with 20 images each (4 images for each pattern). In total there are 905 and 979 cell images extracted for training and testing. Five-fold validations of training and testing were created by randomly selecting the training and test images. Both training and testing in each fold contain around 900 cell images (approx. 450 cell images each). Examples are shown in Fig. 2.”
[media_image2.png: Fig. 2 of Wiliem, example cell images (greyscale)]
, where a “vector of weights ϑ” is a feature vector of features (i.e., “patch-level feature[s]”) of said particle, said features being numerical coefficients (i.e., “ϑ” holds the coefficients of “dictionary D”) each associated with one elementary image (i.e., an image “patch”; where each patch is represented as a “combination of dictionary atoms” whose numerical coefficients are “computed for each x_j” of an “image patch”, such that each coefficient is associated with one elementary image, as each patch-level feature x_j is associated with one elementary training image patch) of a set of elementary images (i.e., “cell images extracted for training”) each representing a reference particle (i.e., different classes of HEp-2 cells as depicted in Fig. 2 above) such that a linear combination of said elementary images weighted by said coefficients (i.e., “Dϑ” is a linear combination of said elementary images weighted by said coefficients) approximates the representation of said target particle in the input image (i.e., “represent each patch as a combination of dictionary atoms”));
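For illustration only, the relied-upon relationship above, a feature vector ϑ such that the linear combination Dϑ approximates the patch, can be sketched as follows. The dictionary, the toy patch, and the use of Orthogonal Matching Pursuit as the sparse-coding routine are illustrative assumptions, not material taken from Wiliem:

```python
# Illustrative sketch (not from the reference): extract a sparse feature
# vector theta so that the linear combination D @ theta of "elementary
# images" (dictionary atoms, the columns of D) approximates a patch x.
import numpy as np

def omp(D, x, n_nonzero):
    """Greedy Orthogonal Matching Pursuit: repeatedly pick the atom most
    correlated with the residual, then re-fit all selected coefficients
    by least squares."""
    residual = x.copy()
    support = []
    theta = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        j = int(np.argmax(np.abs(D.T @ residual)))   # best matching atom
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        theta[:] = 0.0
        theta[support] = coef                        # coefficients on support
        residual = x - D @ theta
    return theta

rng = np.random.default_rng(0)
D = rng.normal(size=(64, 32))            # 32 toy "elementary images", flattened 8x8
D /= np.linalg.norm(D, axis=0)           # unit-norm atoms, as in K-SVD dictionaries
true_theta = np.zeros(32)
true_theta[[3, 17]] = [1.5, -0.8]        # the toy patch is an exact linear combination
x = D @ true_theta
theta = omp(D, x, n_nonzero=2)
print("reconstruction error:", float(np.linalg.norm(x - D @ theta)))
```

The sketch mirrors the claim mapping: each nonzero entry of ϑ weights one atom, and Dϑ is the weighted linear combination approximating the patch.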
(c) classifying said input image depending on said extracted feature vector (1st para. of pg. 3—see citation in the first limitation of claim 1 above—, where the first 3 paras. of pg. 7 recite:
[media_image3.png: figure reproduced from Wiliem, pg. 7 (greyscale)]
, where classifying the input image (i.e., cell image) by feeding “histograms” constructed from said extracted feature vector (i.e., “ϑ”) into a “Support Vector Machine (SVM) classifier” is classifying said input image depending on at least said extracted feature vector); and
unsupervised learning, using a database of training images of particles in said sample, of the elementary images, wherein the learned elementary images are those that allow (section 3.2.3 on pg. 6—see citation in step (b) of the current claim above—, where the “dictionary D is trained by using the K-SVD algorithm” is unsupervised learning of the elementary images (i.e., “dictionary atoms”) using a database of training images (e.g., “cell images extracted for training” as disclosed in 3rd para. of section 4.1 on pg. 9—see citation in step (b) of the current claim above); wherein the learned elementary images (i.e., “each patch [represented] as a combination of dictionary atoms”) are those that allow approximation of the representations of the particles in the training images by a combination of said elementary images (i.e., “combination of dictionary atoms”)).
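For illustration only, the K-SVD-style dictionary training relied upon above can be sketched compactly. Sparsity is fixed at 1 (the K-means-like special case of K-SVD), and the data, function name, and parameters are illustrative assumptions rather than the reference's actual procedure:

```python
# Illustrative sketch (not from the reference): unsupervised learning of
# "elementary images" by alternating (i) 1-sparse coding of training
# patches and (ii) a rank-1 SVD update of each atom from the patches it serves.
import numpy as np

def ksvd_1sparse(X, n_atoms, n_iter=10, seed=0):
    """X: (dim, n_patches) training patches; returns (dictionary, error)."""
    rng = np.random.default_rng(seed)
    d, n = X.shape
    D = rng.normal(size=(d, n_atoms))
    D /= np.linalg.norm(D, axis=0)            # unit-norm atoms
    for _ in range(n_iter):
        # (i) sparse coding: best single atom per patch
        assign = np.argmax(np.abs(D.T @ X), axis=0)
        # (ii) dictionary update: best rank-1 direction per atom's patches
        for k in range(n_atoms):
            idx = np.where(assign == k)[0]
            if idx.size == 0:
                continue                      # unused atom: leave unchanged
            U, _, _ = np.linalg.svd(X[:, idx], full_matrices=False)
            D[:, k] = U[:, 0]
    # final coding pass with the learned dictionary
    corr = D.T @ X
    assign = np.argmax(np.abs(corr), axis=0)
    coef = corr[assign, np.arange(n)]         # optimal 1-sparse coefficients
    recon = D[:, assign] * coef
    return D, float(np.linalg.norm(X - recon))

rng = np.random.default_rng(1)
atoms_true = rng.normal(size=(16, 4))
atoms_true /= np.linalg.norm(atoms_true, axis=0)
X = atoms_true[:, rng.integers(0, 4, 200)] * rng.uniform(0.5, 2.0, 200)
D, err = ksvd_1sparse(X, n_atoms=4)
print("final reconstruction error:", err)
```

Full K-SVD generalizes this alternation to larger sparsity levels, which is the sense in which the learned atoms best approximate the training images under a linear combination.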
Where Wiliem does not specifically disclose
unsupervised learning… wherein the learned elementary images are those that allow the best approximation of the representations of the particles in the training images by a linear combination of said elementary images;
Aharon teaches, in the same field of endeavor of training a sparse-coded dictionary using the K-SVD algorithm,
unsupervised learning… wherein the learned elementary images are those that allow the best approximation of the representations of the particles in the training images by a linear combination of said elementary images (sections I.A on pg. 4311 and I.C on pg. 4312, recite(s)
[media_image4.png: excerpt reproduced from Aharon (greyscale)]
[media_image5.png: excerpt reproduced from Aharon (greyscale)]
, where “dictionary atoms” are those that allow “the best possible representations for each member” of a training set, by representing each member as a “sparse linear combination of these atoms”, is unsupervised learning wherein the learned elementary images (i.e., “dictionary atoms”) are those that allow the best approximation of the representations of the particles in the training images (i.e., “the best possible representations for each member” of a training set) by a linear combination of said elementary images (i.e., a “sparse linear combination of these atoms”)).
A person of ordinary skill in the art before the effective filing date of the claimed invention would have recognized that the K-SVD algorithm used to train the dictionary of Wiliem, comprising elementary images as dictionary atoms as disclosed above, is an unsupervised learning algorithm of the elementary images, wherein the learned elementary images are those that allow the best approximation of the representations of the particles in the training images by a linear combination of said elementary images. Aharon teaches that the K-SVD algorithm is an unsupervised learning algorithm that allows the best possible representations for each member of a training set by representing each member as a sparse linear combination of the dictionary atoms, where the elementary images of Wiliem (e.g., cell images extracted for training) are the dictionary atoms in the dictionary of Wiliem.
Regarding claim 12, the claim differs from claim 1 in that it is in the form of a system. Therefore, claim 12 recites limitations similar to those of claim 1 and is rejected under similar rationale and reasoning (see the analysis of claim 1 above).
Regarding claim 13, Wiliem in view of Aharon discloses the system as claimed in claim 12, wherein Wiliem further discloses a device for observing said target particle in the sample (section 4.1 on pg. 9, recite(s)
[4.1. Datasets: ICPRContest and SNP HEp-2] “The ICPR HEp-2 Cell Classification Contest (ICPRContest) Dataset [11] contains 1,457 cells extracted from 28 specimen images2. It contains six patterns: centromere, coarse speckled, cytoplasmic, fine speckled, homogeneous, and nucleolar. Each specimen image was acquired by means of fluorescence microscope (40-fold magnification) coupled with 50W mercury vapour lamp and with a CCD camera. …
…The SNP HEp-2 Cell (SNPHEp-2) Dataset3 [41] was obtained between January and February 2012 at Sullivan Nicolaides Pathology laboratory, Australia. This dataset has five patterns: centromere, coarse speckled, fine speckled, homogeneous and nucleolar. …Each specimen image was captured using a monochrome high dynamic range cooled microscopy camera, which was fitted on a microscope with a plan-Apochromat 20x/0.8 objective lens and an LED illumination source.”
, where a “microscope” is a device for observing said particle (e.g., a “HEp-2 cell”) in the sample (e.g., specimen)).
Regarding claim 14, Wiliem in view of Aharon discloses a non-transitory computer-readable medium storing instructions that, when executed by a computer, cause the computer to execute a method as claimed in claim 1 for classifying at least one input image representing a target particle in a sample (Wiliem in view of Aharon discloses the method as claimed in claim 1—see the rejection of claim 1 above—; where Wiliem further discloses executing the method of claim 1 on a computer (i.e., abstract—see citation in the claim 1 limitation “…data-processing means of a client…” above—, where the method being a “CAD” (computer aided diagnostic) system is executing the method as a computer program product on at least a “computer”); wherein a person of ordinary skill in the art before the effective filing date of the claimed invention would have recognized that a “computer” comprises at least a non-transitory computer-readable medium (e.g., memory) storing instructions that, when executed by the computer, cause the computer to execute said method).
Regarding claim 15, Wiliem in view of Aharon discloses a non-transitory storage medium readable by a piece of computer equipment, on which a computer program product comprises code instructions for executing a method as claimed in claim 1 for classifying at least one input image representing a target particle in a sample (Wiliem in view of Aharon discloses the method as claimed in claim 1—see the rejection of claim 1 above—; where Wiliem further discloses executing the method of claim 1 on a computer (i.e., abstract—see citation in claim 1 limitation “…data-processing means of a client…” above—, where the method is a “CAD” (computer aided) system is executing the method as a computer program product on at least a “computer”); wherein a person of ordinary skill in the art before the effective filing date of the claimed invention would have recognized that a “computer” comprises at least a non-transitory storage medium (e.g., memory) readable by a piece of computer equipment on the computer to execute said method).
Claims 8-9 are rejected under 35 U.S.C. 103 as being unpatentable over Wiliem in view of Aharon as applied to claim 1 above, and further in view of Pezzillo et al. (Pezzillo; US 2019/0370686 A1).
Regarding claim 8, Wiliem in view of Aharon discloses the method as claimed in claim 1, wherein Wiliem further discloses step (c) is implemented by means of a classifier (1st para. of pg. 3—see citation in the first limitation of claim 1 above—, where the “Support Vector Machine (SVM) classifier” is a classifier), the method comprising a step (a0) of training (1st para. of section 4.2 on pg. 10, section 3.5 on pgs. 8-9, and section 3.4.2 on pgs. 7-8, recite(s)
[4.2. Combinations of Local Features, Histogram Generation and Spatial Structures] “We follow Lazebnik et al. [18] and Wiliem et al. [41] for SPM and DR implementations, respectively. The SVM classifier is used in all cases, with the kernels specified in Eqns. (7) and (10) for the SPM and DR methods, respectively. As noted in Section 3.4.3, a form of Eqn. (7) is used as the SVM kernel for the CPM method.”
[media_image6.png: excerpt reproduced from Wiliem (greyscale)]
[media_image7.png: excerpt reproduced from Wiliem (greyscale)]
, where the classifier (i.e., “SVM”) comprises multiple kernel learning, including parameters in kernels such as a “Dual Region (DR)” kernel (e.g., “good settings for the τ1, τ2, and α[i] parameters”), such that learning these settings is training parameters of said classifier using a training database of already classified feature vectors/matrices of particles in a sample (e.g., a “training set” with “feature vector[s] and [its] corresponding ground truth labels” obtained from the training set of “cell images extracted for training and testing” as disclosed in section 4.1 on pg. 9—see citation in step (b) of the current claim above)).
Where Wiliem in view of Aharon does not specifically disclose
…the method comprising a step (a0) of training, by data-processing means of a server,…;
Pezzillo teaches, in the same field of endeavor of training machine learning models,
…the method comprising a step (a0) of training, by data-processing means of a server,… (para(s). [0015], [0018], [0030], and [0088], recite(s)
[0015] “Edge computing devices implemented with processing/computing capabilities, such as mobile devices, desktops, laptops, tablets, internet of things (IoT) devices, medical equipment, industrial equipment, automobiles and other vehicles, robots, drones, etc., may execute applications that include artificial intelligence/machine learning models (hereinafter referred to as “models” or “ML models”)…”
[0018] “FIG. 1 illustrates an example system 100 for machine learning at edge computing devices 102, 104, and 106 based on distributed feedback. A machine learning (ML) model manager 108 executes as a cloud-based service, although other implementations of an ML model manager 108 can execute in an on-premise server, a private data center, or another computing system that is communicatively coupled to edge computing devices via a communications network 110.”
[0030] “Accordingly, through execution of the applications and the ML models on multiple edge computing devices 102, 104, and 106, additional labeled observations may be developed by each of the edge computing devices and fed back to the ML model manager 108 via the communications network 110. Using these new labeled observations, the ML model manager 108 can re-train (e.g., overwrite or update the training of) the one or more ML models provided to the applications at the edge computing devices 102, 104, and 106. Based on policies and/or user or vendor instructions, the re-trained ML models can be re-deployed out to the edge computing device 102, 104, and 106 in an effort to improve application execution results, efficiency, etc. The previous set of ML models on an edge computing device are replaced with the corresponding updated set of new ML models for use by the application 105 on the edge computing device. Such re-training, re-deployment, execution, and feedback can repeat over time during the lifecycle of the application 105.”
[0088] “The computing system can include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. In some embodiments, a server transmits data (e.g., an HTML page) to a client device (e.g., for purposes of displaying data to and receiving user input from a user interacting with the client device). Data generated at the client device (e.g., a result of the user interaction) can be received from the client device at the server.”
, where training (e.g., “re-training”) “ML models” by an “ML model manager”—which executes on a “server”—for deployment to “edge computing devices” is training by data-processing means of a server).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the presently filed invention to modify the system of Wiliem in view of Aharon to incorporate training parameters of said classifier by data-processing means of a server, in order to improve training of said classifier by increasing the resources (e.g., training data) available for training the classifier implemented by the data-processing means of a client (e.g., an edge computing device), as taught by Pezzillo (para(s). [0004], recite(s)
[0004] “The described technology provides machine learning model re-training based on distributed feedback received from a plurality of edge computing devices. A trained instance of a machine learning model is transmitted, via one or more communications networks, to the plurality of edge computing devices. Feedback data is collected, via the one or more communications networks, from the plurality of edge computing devices. The feedback data includes labeled observations generated by the execution of the trained instance of the machine learning model at the plurality of edge computing devices on unlabeled observations captured by the plurality of edge computing devices. A re-trained instance of the machine learning model is generated from the trained instance using the collected feedback data. The re-trained instance of the machine learning model is transmitted, via the one or more communications networks, to the plurality of edge computing devices.”
).
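For illustration only (not part of the record), the distributed-feedback re-training cycle described in Pezzillo's paragraph [0004] can be sketched as follows. All function and variable names are hypothetical, and the "model" is a deliberately trivial threshold so the sketch stays self-contained; it only mirrors the cited flow of transmit, collect feedback, and re-train.

```python
# Hypothetical sketch of the re-training cycle in Pezzillo, para. [0004]:
# a server trains a model, edge devices label their unlabeled observations
# with it, and the server re-trains on the collected feedback.

def train(observations):
    """Server-side training: the toy 'model' is just the mean threshold."""
    return sum(observations) / len(observations)

def edge_inference(model, unlabeled):
    """Edge device generates labeled observations from unlabeled captures."""
    return [(x, x >= model) for x in unlabeled]

def retrain_cycle(initial_data, edge_streams):
    model = train(initial_data)                      # trained instance
    feedback = []                                    # collected feedback data
    for stream in edge_streams:                      # each edge computing device
        feedback.extend(edge_inference(model, stream))
    # Re-trained instance generated from the collected feedback (values only,
    # in this toy sketch); it would then be transmitted back to the edges.
    retrained = train(initial_data + [x for x, _ in feedback])
    return model, retrained

model, retrained = retrain_cycle([1.0, 3.0], [[2.0, 4.0], [0.0, 6.0]])
```

A production system would transmit serialized model weights over a network and train a real learner; the control flow, however, matches the cited cycle.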
Regarding claim 9, Wiliem, as modified by Aharon and Pezzillo, discloses the method as claimed in claim 8, wherein Wiliem further discloses said classifier is chosen from a support vector machine, a k-nearest neighbor algorithm, or a convolutional neural network (1st para. of pg. 3; see citation in the first limitation of claim 1 above, where the "Support Vector Machine (SVM) classifier" is at least a support vector machine).
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Wiliem in view of Aharon as applied to claim 1 above, and further in view of Song et al. (Song; US 2019/0108444 A1).
Regarding claim 10, Wiliem in view of Aharon discloses the method as claimed in claim 1, wherein Song teaches, in the same field of endeavor of image classification models using sparse coding, that step (c) comprises reducing the number of variables of the feature vector by means of the t-SNE algorithm (para(s). [0096], recite(s)
[0096] “In order to understand the behavior of the representations generated by different approaches, the t-SNE (T-distributed Stochastic Neighbor Embedding) algorithm can be used to obtain 2-D visualizations of the considered baselines and the described approaches.”
, where a person of ordinary skill in the art before the effective filing date of the claimed invention would have recognized that a “t-SNE” algorithm obtaining “2-D visualizations of the considered baselines” of classification models is reducing the number of variables of the feature vector for data visualization).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Wiliem in view of Aharon to incorporate a t-SNE algorithm for reducing the number of variables of the feature vector, to better visualize the considered baselines (e.g., each elementary image in the set of elementary images) of the classification model, as taught by Song above.
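For illustration only (not part of the record), the dimensionality reduction described in Song's paragraph [0096] amounts to mapping each high-dimensional feature vector to a 2-D point for visualization. An actual t-SNE embedding is normally obtained via a library call such as scikit-learn's `sklearn.manifold.TSNE`; the standard-library-only sketch below substitutes a random linear projection purely to show the reduction in the number of variables, and all names in it are hypothetical.

```python
# Illustrative stand-in for t-SNE (Song, para. [0096]): each n-dimensional
# feature vector is reduced to a 2-D point. Real t-SNE uses a gradient-based
# embedding; a seeded random linear projection is used here only to keep the
# sketch dependency-free while demonstrating the dimensionality change.
import random

def reduce_to_2d(feature_vectors, seed=0):
    rng = random.Random(seed)
    dim = len(feature_vectors[0])
    # Two projection directions standing in for the two output axes.
    proj = [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(2)]
    return [
        tuple(sum(p * x for p, x in zip(axis, vec)) for axis in proj)
        for vec in feature_vectors
    ]

# Two 4-variable feature vectors become two 2-D points suitable for plotting.
points = reduce_to_2d([[1.0, 0.0, 2.0, 3.0], [0.5, 1.5, 0.0, 2.0]])
```

With scikit-learn available, the equivalent call would be `TSNE(n_components=2).fit_transform(X)`, which likewise yields one 2-D point per input feature vector.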
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JULIA Z YAO whose telephone number is (571)272-2870. The examiner can normally be reached Monday - Friday (8:30AM - 5PM).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Emily Terrell, can be reached at (571)270-3717. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/J.Z.Y./Examiner, Art Unit 2666
/EMILY C TERRELL/Supervisory Patent Examiner, Art Unit 2666