DETAILED ACTION
This action is in response to the application filed 07/28/2022. Claims 1, 4-12, and 15-17 are pending and have been examined.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 01/09/2026 has been entered.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Specification
The disclosure is objected to because of the following informalities:
[0054] Equation 1 is fuzzy and very difficult to parse
[0055] The terms from Equation 1 are fuzzy and very difficult to parse
[0058] Equation 2 is fuzzy and very difficult to parse
[0060] The terms from Equation 2 are fuzzy and very difficult to parse
[0063] Equation 3 is fuzzy and very difficult to parse
[0064] Equation 4 is fuzzy and very difficult to parse
[0066] The terms from Equation 4 are fuzzy and very difficult to parse
[0069] Equation 5 is fuzzy and very difficult to parse
[0071] “used to training” is improper grammar.
[0072] Equation 6 is fuzzy and very difficult to parse
Appropriate correction is required.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1, 4-12, and 15-17 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (an abstract idea) without significantly more.
Claim 1
Step 1: The claim recites “A method of controlling an electronic apparatus”, and is therefore directed to the statutory category of process
Step 2A Prong 1: The claim recites the following judicial exception(s)
reconstructing the source data included in the source domain: This can be performed as a mental process. One can merely add a value to a member of source data from some source domain.
generating target data by training the reconstructed source data based on the source data using a generative adversarial network (GAN): This can be performed as a mental process. One can merely designate the reconstructed source data as target data.
generating the target domain including the generated target data: This can be performed as a mental process. One can merely designate the set of all target data as the target domain.
wherein the generating of the target data comprises:
based on a category of a class for the reconstructed source data corresponding to the source data being matched based on a class including: the source data among a plurality of classes, identifying a first class loss value according to a preset method: This can be performed as a mental process. One can merely assign a loss value dependent on whether a classifier machine learning model that outputs reconstructed source data predicts a class in the reconstructed data present in the corresponding source data of the source domain.
based on the category of the class for the reconstructed source data not being matched, identifying a second class loss value according to the preset method to obtain a second class loss value, wherein the second class loss value is greater than the first class loss value: This can be performed as a mental process. One can merely assign a loss inversely proportional to how similar corresponding source and reconstructed source data is.
identifying a distance map based on a distance between feature vectors of source data included in different classes among a plurality of classes: This can be performed as a mental process. One can merely identify distances between vectors of source data in different classes.
obtaining a distance loss value so as to maintain a distance between the feature vectors of the reconstructed source data corresponding to the source data based on the identified distance map: This can be performed as a mental process. One can merely assign a loss value inversely proportional to the distance between each pair of target vectors for target vectors that have different classes in the corresponding source vectors.
Step 2A Prong 2: The judicial exception(s) are not integrated into a practical application through the following additional element(s)
receiving, via a communication interface included in the electronic apparatus, source data included in a source domain: This amounts to mere data reception and is insignificant extra-solution activity (MPEP 2106.05(g)).
generating target data by training the reconstructed source data based on the source data using a generative adversarial network (GAN): This is mere instruction to generate target data by training reconstructed source data based on source data in a generic manner using a generic neural network (MPEP 2106.05(f)).
applying the identified first class loss value, the identified second class loss value, and the obtained distance loss value to the reconstructed source data for training the GAN: This is mere instruction to train a neural network based on judicial exceptions in a generic manner (MPEP 2106.05(f)).
Step 2B: The following additional element(s) of the claim, taken alone or in combination, do not amount to significantly more than the recited judicial exception(s)
receiving, via a communication interface included in the electronic apparatus, source data included in a source domain: This is an instance of retrieving information from memory, a limitation known to be well-understood, routine, and conventional (MPEP 2106.05(d) II. iv.)
generating target data by training the reconstructed source data based on the source data using a generative adversarial network (GAN): This is mere instruction to generate target data by training reconstructed source data based on source data in a generic manner using a generic neural network (MPEP 2106.05(f)).
applying the identified first class loss value, the identified second class loss value, and the obtained distance loss value to the reconstructed source data for training the GAN: This is mere instruction to train a neural network based on judicial exceptions in a generic manner (MPEP 2106.05(f)).
Claim 4
Step 1: The claim recites a process, as in claim 1
Step 2A Prong 1: The claim recites the following further judicial exception(s)
identifying at least one loss value among a cluster loss value by cluster loss, a class activating mapping (CAM) loss value by CAM loss, or a feature loss value by feature loss: This can be performed as a mental process. One can merely observe or imagine any of these types of losses.
additionally applying at least one loss value among the identified cluster loss value, CAM loss value, or feature loss value to the reconstructed source data: This can be performed as a mental process. One can merely calculate a loss value from input reconstructed source data.
Step 2A Prong 2: The judicial exception(s) are not integrated into a practical application through the additional element(s)
Step 2B: The additional element(s) of the claim, taken alone or in combination, do not amount to significantly more than the recited judicial exception(s)
Claim 5
Step 1: The claim recites a process, as in claim 4
Step 2A Prong 1: The claim recites no further judicial exception(s)
Step 2A Prong 2: The judicial exception(s) are not integrated into a practical application through the further additional element(s)
wherein the generating of the target data further comprises obtaining a cluster loss value based on a preset method so that a distance of feature vectors of the reconstructed source data included in different classes among a plurality of classes is far apart: This constitutes mere reception of data and is insignificant extra-solution activity (MPEP 2106.05(g)).
Step 2B: The further additional element(s) of the claim, taken alone or in combination, do not amount to significantly more than the recited judicial exception(s)
wherein the generating of the target data further comprises obtaining a cluster loss value based on a preset method so that a distance of feature vectors of the reconstructed source data included in different classes among a plurality of classes is far apart: This is an instance of retrieving information from memory, a limitation known to be well-understood, routine, and conventional (MPEP 2106.05(d) II. iv.).
Claim 6
Step 1: The claim recites a process, as in claim 4
Step 2A Prong 1: The claim recites the following further judicial exception(s)
identifying a weight region of the source data to be applied when classifying a class of the source data by an artificial intelligence neural network model including the source domain: This can be performed as a mental process. One can merely identify a particular subset interval of each member of the source data.
obtaining a CAM loss value to set a weight region of the reconstructed source data corresponding to the identified source data: This can be performed as a mental process. One can merely subtract the interval of each member of the source data from the corresponding interval in the corresponding member of the reconstructed source data, and sum the difference values together.
Step 2A Prong 2: The judicial exception(s) are not integrated into a practical application through the further additional element(s)
identifying a weight region of the source data to be applied when classifying a class of the source data by an artificial intelligence neural network model including the source domain: This is mere instruction to apply a judicial exception to a generic data structure in a generic manner (MPEP 2106.05(f)).
Step 2B: The further additional element(s) of the claim, taken alone or in combination, do not amount to significantly more than the recited judicial exception(s)
identifying a weight region of the source data to be applied when classifying a class of the source data by an artificial intelligence neural network model including the source domain: This is mere instruction to apply a judicial exception to a generic data structure (MPEP 2106.05(f)).
Claim 7
Step 1: The claim recites a process, as in claim 6
Step 2A Prong 1: The claim recites the following further judicial exception(s)
wherein the weight region of the source data comprises at least one region of a specific region of image data or a specific frequency region of signal data: Identifying a weight region of the source data can still be performed as a mental process.
wherein the weight region of the reconstructed source data comprises at least one region of a specific region of image data or a specific frequency region of signal data: Setting a weight region of the reconstructed source data can still be performed as a mental process.
Step 2A Prong 2: The judicial exception(s) are not integrated into a practical application through the additional element(s)
Step 2B: The additional element(s) of the claim, taken alone or in combination, do not amount to significantly more than the recited judicial exception(s)
Claim 8
Step 1: The claim recites a process, as in claim 4
Step 2A Prong 1: The claim recites no further judicial exception(s)
Step 2A Prong 2: The judicial exception(s) are not integrated into a practical application through the further additional element(s)
obtaining the feature loss value so that a feature vector of the source data is the same as a feature vector of the reconstructed source data corresponding to the source data: This amounts to mere reception of data and is insignificant extra-solution activity (MPEP 2106.05(g)).
Step 2B: The further additional element(s) of the claim, taken alone or in combination, do not amount to significantly more than the recited judicial exception(s)
obtaining the feature loss value so that a feature vector of the source data is the same as a feature vector of the reconstructed source data corresponding to the source data: This is an instance of retrieving information from memory, a limitation known to be well-understood, routine, and conventional (MPEP 2106.05(d) II. iv.)
Claim 9
Step 1: The claim recites a process, as in claim 1
Step 2A Prong 1: The claim recites the following further judicial exception(s)
wherein the source domain is a domain generated in an artificial intelligence learning model of a first electronic apparatus: Reconstructing source data can still be performed as a mental process. One merely has to observe the output of some learning model and modify it mentally.
Step 2A Prong 2: The judicial exception(s) are not integrated into a practical application through the further additional element(s)
wherein the target domain is a domain for an artificial intelligence learning model of a second electronic apparatus: This is mere instruction to apply a judicial exception to a generic data structure in a generic manner (MPEP 2106.05(f)).
wherein the first electronic apparatus and the second electronic apparatus have at least one different hardware specification, software platform, or software version: This is mere instruction to apply judicial exceptions to generic computer hardware and/or components (MPEP 2106.05(f)).
Step 2B: The further additional element(s) of the claim, taken alone or in combination, do not amount to significantly more than the recited judicial exception(s)
wherein the target domain is a domain for an artificial intelligence learning model of a second electronic apparatus: This is mere instruction to apply a judicial exception to a generic data structure in a generic manner (MPEP 2106.05(f)).
wherein the first electronic apparatus and the second electronic apparatus have at least one different hardware specification, software platform, or software version: This is mere instruction to apply judicial exceptions to generic computer hardware and/or components (MPEP 2106.05(f)).
Claim 10
Step 1: The claim recites a process, as in claim 1
Step 2A Prong 1: The claim recites the following further judicial exception(s)
generating fake data: This can be performed as a mental process. One can merely think of constructed data points.
discriminating whether input data is the real data or the fake data: This can be performed as a mental process. One can merely use their best judgment to classify each data point they observe.
Step 2A Prong 2: The judicial exception(s) are not integrated into a practical application through the further additional element(s)
receiving a Gaussian distribution: This amounts to mere reception of data and is insignificant extra-solution activity (MPEP 2106.05(g)).
receiving trained real data and the generated fake data: This amounts to mere reception of data and is insignificant extra-solution activity (MPEP 2106.05(g)).
Step 2B: The further additional element(s) of the claim, taken alone or in combination, do not amount to significantly more than the recited judicial exception(s)
receiving a Gaussian distribution: This is an instance of retrieving information from memory, a limitation known to be well-understood, routine, and conventional (MPEP 2106.05(d) II. iv.).
receiving trained real data and the generated fake data: This is an instance of retrieving information from memory, a limitation known to be well-understood, routine, and conventional (MPEP 2106.05(d) II. iv.).
Claim 11
Step 1: The claim recites a process, as in claim 10
Step 2A Prong 1: The claim recites the following further judicial exception(s)
wherein the fake data is generated to be close to the real data: One can slightly modify observed data to generate fake data close to the real data. Thus, generating fake data is still a mental process.
Step 2A Prong 2: The judicial exception(s) are not integrated into a practical application through the additional element(s).
Step 2B: The additional element(s) of the claim, taken alone or in combination, do not amount to significantly more than the recited judicial exception(s).
Claim 12
Step 1: The claim recites “An electronic apparatus”, and is therefore directed to the statutory category of machine
Step 2A Prong 1: The claim recites the following judicial exception(s)
reconstruct source data included in the source domain: This can be performed as a mental process. One can merely add a value to a member of source data from some source domain.
generate a target domain by training the reconstructed source data based on the source data using a generative adversarial network (GAN): This can be performed as a mental process. One can merely designate the reconstructed source data as target domain data.
generate a target domain including the generated target data: This can be performed as a mental process. One can merely designate the set of all target data as the target domain.
wherein the processor is further configured to:
based on a category of a class for the reconstructed source data corresponding to the source data being matched based on a class including: the source data among a plurality of classes, identify a first class loss value according to a preset method: This can be performed as a mental process. One can merely assign a loss value dependent on whether a classifier machine learning model that outputs reconstructed source data predicts a class in the reconstructed data present in the corresponding source data of the source domain.
based on the category of the class for the reconstructed source data not being matched, identify a second class loss value according to the preset method to obtain a second class loss value, wherein the second class loss value is greater than the first class loss value: This can be performed as a mental process. One can merely assign a loss inversely proportional to how similar corresponding source and reconstructed source data is.
identify a distance map based on a distance between feature vectors of source data included in different classes among a plurality of classes: This can be performed as a mental process. One can merely identify distances between vectors of source data in different classes.
obtain a distance loss value so as to maintain a distance between the feature vectors of the reconstructed source data corresponding to the source data based on the identified distance map: This can be performed as a mental process. One can merely assign a loss value inversely proportional to the distance between each pair of target vectors for target vectors that have different classes in the corresponding source vectors.
Step 2A Prong 2: The judicial exception(s) are not integrated into a practical application through the following additional element(s)
a communication interface; and a processor: This is mere instruction to execute the recited judicial exceptions on generic computer hardware (MPEP 2106.05(f)).
receive source data, via the communication interface, included in a source domain: This amounts to mere data reception and is insignificant extra-solution activity (MPEP 2106.05(g)).
generate a target domain by training the reconstructed source data based on the source data using a generative adversarial network (GAN): This is mere instruction to generate target data by training reconstructed source data based on source data in a generic manner using a generic neural network (MPEP 2106.05(f)).
apply the identified first class loss value, the second class loss value, and the obtained distance loss value to the reconstructed source data for training the GAN: This is mere instruction to train a neural network based on judicial exceptions in a generic manner (MPEP 2106.05(f)).
Step 2B: The following additional element(s) of the claim, taken alone or in combination, do not amount to significantly more than the recited judicial exception(s)
a communication interface; and a processor: This is mere instruction to execute the recited judicial exceptions on generic computer hardware (MPEP 2106.05(f)).
receive source data, via the communication interface, included in a source domain: This is an instance of retrieving information from memory, a limitation known to be well-understood, routine, and conventional (MPEP 2106.05(d) II. iv.)
generate a target domain by training the reconstructed source data based on the source data using a generative adversarial network (GAN): This is mere instruction to generate target data by training reconstructed source data based on source data in a generic manner using a generic neural network (MPEP 2106.05(f)).
apply the identified first class loss value, the second class loss value, and the obtained distance loss value to the reconstructed source data for training the GAN: This is mere instruction to train a neural network based on judicial exceptions in a generic manner (MPEP 2106.05(f)).
Claim 15
Step 1: The claim recites a machine, as in claim 12.
Step 2A Prong 1: The claim recites no further judicial exception(s)
Step 2A Prong 2: The judicial exception(s) are not integrated into a practical application through the further additional element(s)
wherein the processor is further configured to obtain a cluster loss value based on a preset method so that a distance of feature vectors of the reconstructed source data included in different classes among a plurality of classes is far apart: This constitutes mere reception of data and is insignificant extra-solution activity (MPEP 2106.05(g)).
Step 2B: The further additional element(s) of the claim, taken alone or in combination, do not amount to significantly more than the recited judicial exception(s)
wherein the processor is further configured to obtain a cluster loss value based on a preset method so that a distance of feature vectors of the reconstructed source data included in different classes among a plurality of classes is far apart: This is an instance of retrieving information from memory, a limitation known to be well-understood, routine, and conventional (MPEP 2106.05(d) II. iv.).
Claims 16-17
Step 1: Claims 16-17 recite a machine, as in claim 12.
Step 2A Prong 1: Claims 16-17 recite the same judicial exception(s) as claims 4 & 6, respectively.
Step 2A Prong 2: The judicial exception(s) are not integrated into a practical application through any additional elements. The analysis of claims 16-17 at this step mirrors that of claims 4 & 6, respectively, with the exception that claims 16-17 are directed to “a communication interface; and a processor”, said processor performing operations mirroring those of claims 4 & 6. This is a mere instruction to apply the exceptions using generic computer equipment (MPEP 2106.05(f)).
Step 2B: The additional element(s) of the claim, taken alone or in combination, do not amount to significantly more than the recited judicial exception(s). The analysis of claims 16-17 at this step mirrors that of claims 4 & 6, with the exception that claims 16-17 are directed to “a communication interface; and a processor”, said processor performing operations mirroring those of claims 4 & 6. This is a mere instruction to apply the exceptions using generic computer equipment (MPEP 2106.05(f)).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claim(s) 1, 9-12, and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Yoo et al. (Pixel-Level Domain Transfer, 2016, arXiv:1603.07442v3), hereafter referred to as Yoo, in view of Abrol et al. (DOMAIN ADAPTATION USING POST-PROCESSING MODEL CORRECTION, filed 1/12/2020, US 2021/0312674 A1), hereafter referred to as Abrol, and further in view of Deng et al. (Rethinking Triplet Loss for Domain Adaptation, January 2021, IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, VOL. 31, NO. 1), hereafter referred to as Deng.
Regarding claim 1, Yoo teaches [a] method of controlling an electronic apparatus for generating a generative adversarial network (GAN)-based target domain, the method comprising:
reconstructing the source data included in the source domain: “To generate realistic target images, we employ the real/fake-discriminator as in Generative Adversarial Nets” (Yoo, page 1, Abstract); “We transfer a knowledge in a source domain (source data) to a pixel-level target image (reconstructed source data) while overcoming the semantic gap between the two domains” (Yoo, page 2, paragraph 2).
generating target data by training the reconstructed source data based on the source data using a generative adversarial network (GAN):
“we present a pixel-level domain converter composed of an encoder for semantic embedding of a source and a decoder to produce a target image (reconstructed source data)” (Yoo, page 2, paragraph 2)
“To train our converter, we first place a separate network named domain discriminator on top of the converter. The domain discriminator takes a pair of a source image (source data) and a target image (target data / reconstructed source data), and is trained to make a binary decision whether the input pair is associated or not. The domain discriminator then supervises the converter to produce associated images. Both of the networks are jointly optimized by the adversarial training method … Such binary supervision solves the problem of non-deterministic property of the target domain and enables us to train the semantic relation between the domains” (Yoo, page 2, paragraph 3).
[Image: media_image1.png] “Whole architecture for pixel-level domain transfer” (Yoo, page 6, Fig. 2). As is evident from the descriptions of the system and the figure above, the converter and discriminators form a generative adversarial network, where discriminators are used as the ‘adversaries’ to train the converter to produce better domain transfers.
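For illustration only (not part of the claim mapping): a minimal sketch of the adversarial arrangement described above, in which a converter (generator) and a discriminator are jointly optimized. All module shapes, names, and hyperparameters below are hypothetical stand-ins, not Yoo's actual networks.

```python
# Illustrative sketch only: a converter (generator) trained adversarially
# against a discriminator, as in a GAN. All shapes, names, and
# hyperparameters are hypothetical stand-ins, not Yoo's actual networks.
import torch
import torch.nn as nn

converter = nn.Sequential(            # encoder: semantic embedding of the source
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, 64),                # decoder: produces the target image
)
discriminator = nn.Sequential(nn.Linear(64, 1), nn.Sigmoid())
bce = nn.BCELoss()
opt_c = torch.optim.Adam(converter.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

source = torch.randn(8, 64)           # stand-in source-domain batch
real_target = torch.randn(8, 64)      # stand-in ground-truth target batch

for _ in range(100):
    # Train the discriminator (the 'adversary') to separate real targets
    # from converted ones.
    fake_target = converter(source)
    d_loss = (bce(discriminator(real_target), torch.ones(8, 1))
              + bce(discriminator(fake_target.detach()), torch.zeros(8, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Train the converter to produce targets the discriminator accepts.
    g_loss = bce(discriminator(converter(source)), torch.ones(8, 1))
    opt_c.zero_grad(); g_loss.backward(); opt_c.step()
```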
generating the target domain including the generated target data: “We take an image as a conditioned input lying in a domain (source domain), and re-draw a target image (generated target data) lying on another (target domain)” (Yoo, page 1, paragraph 2).
… wherein the generating of the target data comprises:
based on a category of a class for the reconstructed source data corresponding to the source data being matched based on a class including: the source data among a plurality of classes, identifying a first class loss value according to a preset method; based on the category of the class for the reconstructed source data not being matched, identifying a second class loss value according to the preset method to obtain a second class loss value, wherein the second class loss value is greater than the first class loss value; obtaining a distance loss value … and applying the identified first class loss value, the identified second class loss value, and the obtained distance loss value to the reconstructed source data for training the GAN:
“The domain discriminator D_A is the lowest network … The network D_A takes a pair of source (source data) and target (reconstructed source data) as input, and produces a scalar probability of whether the input pair is associated or not. Let us assume that we have a source I_S, its ground truth target I_T (positive category of class) and an irrelevant target I_T^- (negative category of class). We also have an inference Î_T (inferred class) from the converter C. We then define the loss L_A^D (class loss value) of the domain discriminator D_A as, [Equation: media_image2.png]” (Yoo, page 8, paragraph 1). This loss formula is a preset method.
“The domain discriminator (component of GAN) loss is minimized (train[ed]) for training the domain discriminator while it is maximized for training the converter. The better the domain discriminator distinguishes a ground-truth I_T and an inference Î_T, the better the converter transfers the source into a relevant target” (Yoo, page 8, paragraph 2).
Examiner’s note: The domain discriminator loss’s value depends entirely on the classes of the two pieces of source and target (reconstructed source) data being compared. When the target data has the ground truth class of the source data (I = I_T) (class match), t := 1 and a higher similarity value from the discriminator results in a lower loss value (first class loss value) in the range of (-inf, 0]. When the target data has a different class than the source data (I = I_T^-) (class not matched), t := 0 and a higher similarity value from the discriminator results in a higher loss value (second class loss value) in the range of [0, +inf).
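For illustration only (not part of the claim mapping): the reading above can be made concrete with a short sketch. It assumes a standard binary cross-entropy form for the paired loss, which takes nonnegative values; the exact form and sign convention of Yoo's equation (reproduced as an image in the record) may differ.

```python
# Illustrative only: assumes a binary cross-entropy form for the paired
# domain-discriminator loss; Yoo's exact equation may differ in form or
# sign convention.
import math

def paired_loss(assoc_prob: float, t: int) -> float:
    """Loss for a (source, target) pair, given the discriminator's scalar
    association probability and label t (1 = classes match, 0 = mismatch)."""
    if t == 1:
        return -math.log(assoc_prob)        # first class loss value
    return -math.log(1.0 - assoc_prob)      # second class loss value

# For a discriminator that is confident the pair is associated:
print(paired_loss(0.95, t=1))   # matched pair: small loss (~0.05)
print(paired_loss(0.95, t=0))   # mismatched pair: larger loss (~3.0)
```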
Yoo relates to domain transfer with GANs and is analogous to the claimed invention.
While Yoo fails to disclose the further limitations of the claim, Abrol teaches a method of receiving, via a communication interface included in the electronic apparatus, source data included in a source domain:
“A user enters commands or information into the computer 1602 through input device(s) 1628. Input devices 1628 include, but are not limited to, a pointing device such as a mouse, trackball, stylus, touch pad, keyboard, microphone, joystick, game pad, satellite dish, scanner, TV tuner card, digital camera, digital video camera, web camera, and the like. These and other input devices connect (communicate with) to the processing unit 1604 through the system bus 1608 via interface port(s) 1630 (communication interface[s]). Interface port(s) 1630 include, for example, a serial port, a parallel port, a game port, and a universal serial bus (USB). Output device(s) 1636 use some of the same type of ports as input device(s) 1628. Thus, for example, a USB port can be used to provide input to computer 1602” (Abrol, [0119]).
“When the input image that is fed into the source domain model 108 is a target image, the resulting SD model inference output 118 (e.g., a segmentation mask) is denoted herein as S′_t. When the input image is a source domain image (source data), the resulting SD model inference output (e.g., a segmentation mask) is denoted herein as S′_s” (Abrol, [0061]).
Abrol relates to domain transfer with machine learning and is analogous to the claimed invention. Yoo teaches a method of reconstructing source domain data as target domain data. The claimed invention improves upon this method by receiving source data through a communication interface. Abrol teaches receiving source data through a communication interface, applicable to Yoo. A person of ordinary skill in the art would have recognized that sending data received from Abrol’s interface through Yoo’s method would lead to the predictable result of running Yoo’s method on actual computer hardware and real input data, and would improve the known device by enabling the use of real data processing with the method (MPEP 2143 I. (D) Applying a known technique to a known device (method, or product) ready for improvement to yield predictable results).
While Abrol fails to disclose the further limitations of the claim, Deng discloses a method, wherein the generating of the target data comprises:
identifying a distance map based on a distance between feature vectors of source data included in different classes among a plurality of classes:
“labeled source images (feature vectors of source data) are directly pulled (mapped) to the expected weight vectors by cross-entropy loss” (Deng, page 29, Fig. 1)
“SGC requires the images regardless of their domains to follow two types of semantic relations:
Semantic similarity. Images from the same class are semantically similar, thereby should be mapped nearby in the embedding space.
Semantic dissimilarity. Images from different classes are semantically dissimilar, thereby should be mapped far apart in the embedding space” (Deng, page 31, left column, paragraph 5). Source features are mapped to weight vectors of their corresponding classes, pushing features of different classes apart and pulling features of the same class together.
obtaining a distance loss value so as to maintain a distance between the feature vectors of the reconstructed source data corresponding to the source data based on the identified distance map:
[Figure: media_image3.png] “Illustration of SGC effect. The overall objective is to make source (source data) and target features (target data) surround their corresponding weight vectors in the classifier, so as to ensure accurate classification. To this end, SGC aims to pull target features to source features with the same class labels. Meanwhile, labeled source images are directly pulled to the expected weight vectors by cross-entropy loss. Thus, SGC leads to more desirable embeddings, and improves the accuracy on the target dataset. In this figure, different colors denote different classes. Weight vectors W1 and W2 are corresponding to class C1 and class C2, respectively” (Deng, page 29, Fig. 1); “As shown in Fig. 1, SGC pulls the target images (reconstructed source data) to the source images (source data) with the same class labels. This indirectly enforces target images to surround their corresponding weight vectors, and thus leads to accuracy improvement on the target images” (Deng, page 29, right column, paragraph 3); “In practice, SGC is implemented by using a triplet loss (distance loss) function. By minimizing the triplet loss, SGC reduces the distance between semantically similar images and increases that of semantically dissimilar images” (Deng, page 2, left column, paragraph 2). Source features are mapped to the weight vectors of their classes directly, while reconstructed source features are mapped to the source features. Thus, the clustering of the reconstructed source features is based on the mapping of the source features.
Deng relates to domain translation with machine learning and is analogous to the claimed invention. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Yoo and Abrol to cluster target features based on a source feature distance map, as disclosed by Deng. This enables closely related target data to be clustered together for more accurate classification, and is robust to dramatic accuracy costs that pseudo-labeled target data can cause. See Deng, page 29, right column, paragraph 2 and page 2, left column, paragraph 1.
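For illustration only (not part of the claim mapping): a sketch loosely in the spirit of the triplet loss Deng describes, written here to follow the claim language (preserving a precomputed source-feature distance map across different-class pairs) rather than Deng's exact SGC objective. The tensors, margin, and helper names are hypothetical.

```python
# Illustrative only: a triplet-style distance loss that keeps
# reconstructed features at least as far apart, across classes, as the
# corresponding source features (the 'distance map'). Names hypothetical.
import torch

def pairwise_distances(feats: torch.Tensor) -> torch.Tensor:
    """Euclidean distance between every pair of feature vectors."""
    return torch.cdist(feats, feats)

def distance_loss(src_feats, recon_feats, labels, margin: float = 0.0):
    diff_class = labels.unsqueeze(0) != labels.unsqueeze(1)  # different-class pairs
    src_map = pairwise_distances(src_feats).detach()         # identified distance map
    recon_map = pairwise_distances(recon_feats)
    # Penalize different-class pairs whose reconstructed distance collapses
    # below the distance recorded in the source map.
    gap = torch.clamp(src_map - recon_map + margin, min=0.0)
    return gap[diff_class].mean()

src = torch.randn(16, 128)                       # source feature vectors
recon = (src + 0.1 * torch.randn(16, 128)).requires_grad_(True)
labels = torch.randint(0, 4, (16,))              # class labels
distance_loss(src, recon, labels).backward()
```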
Regarding claim 9, the rejection of claim 1 in view of Yoo, Abrol, and Deng is incorporated. Abrol further discloses a method,
wherein the source domain is a domain generated in an artificial intelligence learning model of a first electronic apparatus:
“The terms ‘source domain model’, ‘source model’ ‘source image processing model’, ‘source domain image processing model’ and the like are used herein interchangeably to refer to an imaging processing model (first electronic apparatus) trained on images from specific domain, referred to herein as the source domain. Images included in the source domain are referred to herein as ‘source domain images’ or ‘source images’” (Abrol, [0040]).
“the source domain model 108 can include AI/ML medical image processing model (artificial intelligence learning model)” (Abrol, [0046]).
“When the input image is a source domain image, the resulting SD model inference output (e.g., a segmentation mask) is denoted herein as S′_s (source data)” (Abrol, [0061]).
wherein the target domain is a domain for an artificial intelligence learning model of a second electronic apparatus:
“The terms ‘target domain model’, ‘target model’, ‘target image processing model’, ‘target domain image processing model’, and the like, are used herein interchangeably to refer to an imaging processing model (second electronic apparatus) configured to perform a same or similar image processing task as a corresponding source domain model, yet on images from a different but similar domain, referred to herein as the ‘target domain.’ Images included in the target domain are referred to herein as ‘target domain images’ or ‘target images’” (Abrol, [0040]).
“The target domain model (including the encoder network 306 and the decoder network 1404) can further be applied to the same contrast images” (Abrol, [0101]). The target domain model includes encoder and decoder networks.
“the encoder network 306 described with reference to FIG. 3 can be used in a GAN network architecture to adapt a source domain model for same or enhanced accuracy on the target domain for which the encoder network 306 was trained” (Abrol, [0100]). The encoder is an artificial intelligence learning model that can be trained.
wherein the first electronic apparatus and the second electronic apparatus have at least one different hardware specification, software platform, or software version:
“the domain adaptation module 1304 can generate a new target domain model 1306 (second electronic apparatus) using the encoder network 306 of the (trained) post-processing model 110 and the decoder of the source domain model 108. The image processing component 112 can further include a target domain application component 1308 that applies the target domain model 1306 to the target domain images 114 to generate an inference output 1310 that is more accurate for the target images relative to an inference output generated based on application of the source domain model 108 to the target images. In accordance with these embodiments, the encoder network can be used in a generative adversarial network (GAN) architecture to re-tune the source domain model for the target domain model” (Abrol, [0098]). The target domain model is constructed using parts of the post-processing model and source domain model. Thus, the source domain model and target domain models have unique network structures (software platform[s]).
Abrol relates to domain transfer using machine learning and is analogous to the claimed invention. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Yoo, Abrol, and Deng to use separately structured source and target models, as disclosed by Abrol. Doing so would allow a predictive model trained solely on source data to make inferences about target data from a different distribution, increasing the amount of applicable input data. See Abrol, [0006].
Regarding claim 10, the rejection of claim 1 in view of Yoo, Abrol, and Deng is incorporated. Yoo also teaches a method, wherein the generating of the target data further comprises:
receiving a Gaussian distribution: “The filters of the three networks are randomly initialized from a zero mean Gaussian distribution with a standard deviation of 0.02” (Yoo, page 10, paragraph 4).
generating fake data: “To train such a generator, a discriminator is introduced. The discriminator takes either a real image or a fake image drawn by the generator” (Yoo, page 4, paragraph 3).
wherein the method further comprises:
receiving trained real data and the generated fake data; discriminating whether input data is the real data or the fake data: “To train such a generator, a discriminator is introduced. The discriminator takes either a real image or a fake image drawn by the generator, and distinguishes whether its input is real or fake. The training procedure can be intuitively described as follows. Given an initialized generator G_0, an initial discriminator D_R^0 is firstly trained with real training images {I_i} and fake images {Î_j = G_0(z_j)} drawn by the generator” (Yoo, page 4, paragraph 3).
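For illustration only (not part of the claim mapping): a sketch of the plain GAN step recounted above, with a generator drawing on Gaussian input vectors and a real/fake discriminator. Shapes and names are hypothetical stand-ins.

```python
# Illustrative only: a generator G mapping Gaussian input vectors z to
# fake data, and a discriminator D_R deciding real vs. fake. Shapes and
# names are hypothetical stand-ins.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 64))
D_R = nn.Sequential(nn.Linear(64, 1), nn.Sigmoid())
bce = nn.BCELoss()

real = torch.randn(8, 64)        # stand-in real training images {I_i}
z = torch.randn(8, 16)           # received Gaussian input vectors z
fake = G(z)                      # generated fake data {Î_j = G(z_j)}

# Discriminate whether input data is the real data or the fake data.
d_loss = (bce(D_R(real), torch.ones(8, 1))
          + bce(D_R(fake.detach()), torch.zeros(8, 1)))
d_loss.backward()
```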
Regarding claim 11, the rejection of claim 10 in view of Yoo, Abrol, and Deng is incorporated. Yoo also teaches a method, wherein the fake data is generated to be close to the real data: “The eventual goal of the generator is to map a small dimensional space Z to a pixel-level image space, i.e., to enable the generator to produce a realistic image from an input random vector
z
∈
Z
” (Yoo, page 4, paragraph 2).
Regarding claim 12, Yoo teaches [a]n electronic apparatus generating a generative adversarial network (GAN)-based target domain, the electronic apparatus … configured to:
reconstruct the source data included in the source domain: “To generate realistic target images, we employ the real/fake-discriminator as in Generative Adversarial Nets” (Yoo, page 1, Abstract); “We transfer a knowledge in a source domain (source data) to a pixel-level target image (reconstructed source data) while overcoming the semantic gap between the two domains” (Yoo, page 2, paragraph 2).
generate target data by training the reconstructed source data based on the source data using a generative adversarial network (GAN):
“we present a pixel-level domain converter composed of an encoder for semantic embedding of a source and a decoder to produce a target image (reconstructed source data)” (Yoo, page 2, paragraph 2)
“To train our converter, we first place a separate network named domain discriminator on top of the converter. The domain discriminator takes a pair of a source image (source data) and a target image (target data / reconstructed source data), and is trained to make a binary decision whether the input pair is associated or not. The domain discriminator then supervises the converter to produce associated images. Both of the networks are jointly optimized by the adversarial training method … Such binary supervision solves the problem of non-deterministic property of the target domain and enables us to train the semantic relation between the domains” (Yoo, page 2, paragraph 3).
[Image: media_image1.png] “Whole architecture for pixel-level domain transfer” (Yoo, page 6, Fig. 2). As is evident from the descriptions of the system and the figure above, the converter and discriminators form a generative adversarial network, where discriminators are used as the ‘adversaries’ to train the converter to produce better domain transfers.
generate the target domain including the generated target data: “We take an image as a conditioned input lying in a domain (source domain), and re-draw a target image (generated target data) lying on another (target domain)” (Yoo, page 1, paragraph 2).
… wherein the processor is further configured to:
based on a category of a class for the reconstructed source data corresponding to the source data being matched based on a class including: the source data among a plurality of classes, identify a first class loss value according to a preset method; based on the category of the class for the reconstructed source data not being matched, identify a second class loss value according to the preset method to obtain a second class loss value, wherein the second class loss value is greater than the first class loss value; obtain a distance loss value … and apply the identified first class loss value, the identified second class loss value, and the obtained distance loss value to the reconstructed source data for training the GAN:
“The domain discriminator D_A is the lowest network … The network D_A takes a pair of source (source data) and target (reconstructed source data) as input, and produces a scalar probability of whether the input pair is associated or not. Let us assume that we have a source I_S, its ground truth target I_T (positive category of class) and an irrelevant target I_T^- (negative category of class). We also have an inference Î_T (inferred class) from the converter C. We then define the loss L_A^D (class loss value) of the domain discriminator D_A as, [Equation: media_image2.png]” (Yoo, page 8, paragraph 1). This loss formula is a preset method.
“The domain discriminator (component of GAN) loss is minimized (train[ed]) for training the domain discriminator while it is maximized for training the converter. The better the domain discriminator distinguishes a ground-truth I_T and an inference Î_T, the better the converter transfers the source into a relevant target” (Yoo, page 8, paragraph 2).
Examiner’s note: The domain discriminator loss’s value depends entirely on the classes of the two pieces of source and target (reconstructed source) data being compared. When the target data has the ground truth class of the source data (I = I_T) (class match), t := 1 and a higher similarity value from the discriminator results in a lower loss value (first class loss value) in the range of (-inf, 0]. When the target data has a different class than the source data (I = I_T^-) (class not matched), t := 0 and a higher similarity value from the discriminator results in a higher loss value (second class loss value) in the range of [0, +inf).
Yoo relates to domain transfer with GANs and is analogous to the claimed invention.
While Yoo fails to disclose the further limitations of the claim, Abrol teaches [a]n electronic apparatus generating a generative adversarial network (GAN)-based target domain, the electronic apparatus comprising: an input interface; and a processor:
“With reference to FIG. 16, an example environment 1600 for implementing various aspects of the claimed subject matter includes a computer 1602 (electronic apparatus). The computer 1602 includes a processing unit 1604, a system memory 1606, a codec 1635, and a system bus 1608” (Abrol, [0113]).
“A user enters commands or information into the computer 1602 through input device(s) 1628. … input devices connect to the processing unit 1604 (processor) through the system bus 1608 via interface port(s) 1630 (input interface[s])” (Abrol, [0119]).
Abrol’s apparatus is able to receive source data, via a communication interface included in a source domain:
“A user enters commands or information into the computer 1602 through input device(s) 1628. Input devices 1628 include, but are not limited to, a pointing device such as a mouse, trackball, stylus, touch pad, keyboard, microphone, joystick, game pad, satellite dish, scanner, TV tuner card, digital camera, digital video camera, web camera, and the like. These and other input devices connect (communicate with) to the processing unit 1604 through the system bus 1608 via interface port(s) 1630 (communication interface[s]). Interface port(s) 1630 include, for example, a serial port, a parallel port, a game port, and a universal serial bus (USB). Output device(s) 1636 use some of the same type of ports as input device(s) 1628. Thus, for example, a USB port can be used to provide input to computer 1602” (Abrol, [0119]).
“When the input image that is fed into the source domain model 108 is a target image, the resulting SD model inference output 118 (e.g., a segmentation mask) is denoted herein as S′_t. When the input image is a source domain image (source data), the resulting SD model inference output (e.g., a segmentation mask) is denoted herein as S′_s” (Abrol, [0061]).
Abrol relates to domain transfer with machine learning and is analogous to the claimed invention. Yoo teaches a method of reconstructing source data as target data. The claimed invention improves upon this method by executing it on computer hardware with input interfaces. Abrol teaches computer hardware with input interfaces, applicable to domain transfer methods. A person of ordinary skill in the art would have recognized that executing Yoo’s method on Abrol’s hardware would lead to the predictable result of the method being executable by a computing system which can receive input from input devices, and would improve the known device by allowing it to be performed with real data sourced from input devices (MPEP 2143 I. (D) Applying a known technique to a known device (method, or product) ready for improvement to yield predictable results).
While Abrol fails to disclose the further limitations of the claim, Deng discloses an apparatus, wherein the processor is further configured to:
identify a distance map based on a distance between feature vectors of source data included in different classes among a plurality of classes:
“labeled source images (feature vectors of source data) are directly pulled (mapped) to the expected weight vectors by cross-entropy loss” (Deng, page 29, Fig. 1)
“SGC requires the images regardless of their domains to follow two types of semantic relations:
Semantic similarity. Images from the same class are semantically similar, thereby should be mapped nearby in the embedding space.
Semantic dissimilarity. Images from different classes are semantically dissimilar, thereby should be mapped far apart in the embedding space” (Deng, page 31, left column, paragraph 5). Source features are mapped to weight vectors of their corresponding classes, pushing features of different classes apart and pulling features of the same class together.
obtain a distance loss value so as to maintain a distance between the feature vectors of the reconstructed source data corresponding to the source data based on the identified distance map:
[Figure: media_image3.png] “Illustration of SGC effect. The overall objective is to make source (source data) and target features (target data) surround their corresponding weight vectors in the classifier, so as to ensure accurate classification. To this end, SGC aims to pull target features to source features with the same class labels. Meanwhile, labeled source images are directly pulled to the expected weight vectors by cross-entropy loss. Thus, SGC leads to more desirable embeddings, and improves the accuracy on the target dataset. In this figure, different colors denote different classes. Weight vectors W1 and W2 are corresponding to class C1 and class C2, respectively” (Deng, page 29, Fig. 1); “As shown in Fig. 1, SGC pulls the target images (reconstructed source data) to the source images (source data) with the same class labels. This indirectly enforces target images to surround their corresponding weight vectors, and thus leads to accuracy improvement on the target images” (Deng, page 29, right column, paragraph 3); “In practice, SGC is implemented by using a triplet loss (distance loss) function. By minimizing the triplet loss, SGC reduces the distance between semantically similar images and increases that of semantically dissimilar images” (Deng, page 2, left column, paragraph 2). Source features are mapped to the weight vectors of their classes directly, while reconstructed source features are mapped to the source features. Thus, the clustering of the reconstructed source features is based on the mapping of the source features.
Deng relates to domain translation with machine learning and is analogous to the claimed invention. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Yoo and Abrol to cluster target features based on a source feature distance map, as disclosed by Deng. This enables closely related target data to be clustered together for more accurate classification, and is robust to dramatic accuracy costs that pseudo-labeled target data can cause. See Deng, page 29, right column, paragraph 2 and page 2, left column, paragraph 1.
Regarding claim 15, the rejection of claim 12 in view of Yoo, Abrol, and Deng is incorporated. Deng further teaches an apparatus, wherein the processor is further configured to obtain a cluster loss value based on a preset method so that a distance of feature vectors of the reconstructed source data included in different classes among a plurality of classes is far apart: “SGC (preset method) requires the images regardless of their domains (images including reconstructed source data) to follow two types of semantic relations:
Semantic similarity. Images from the same class are semantically similar, thereby should be mapped nearby in the embedding space.
Semantic dissimilarity. Images from different classes are semantically dissimilar, thereby should be mapped far apart in the embedding space” (Deng, page 31, left column, paragraph 5). As discussed regarding claim 4, SGC is implemented with triplet loss, a cluster loss value.
Deng relates to domain translation with machine learning and is analogous to the claimed invention. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Yoo, Abrol, and Deng to use triplet loss to push target features of different classes apart, as disclosed by Deng. This enables closely related target data to be clustered together for more accurate classification, and is robust to dramatic accuracy costs that pseudo-labeled target data can cause for other losses. See Deng, page 29, right column, paragraph 2 and page 2, left column, paragraph 1.
Claims 4-8 and 16-17 are rejected under 35 U.S.C. 103 as being unpatentable over Yoo et al. (Pixel-Level Domain Transfer, 2016, arXiv:1603.07442v3), hereafter referred to as Yoo, in view of Abrol et al. (DOMAIN ADAPTATION USING POST-PROCESSING MODEL CORRECTION, filed 1/12/2020, US 2021/0312674 A1), hereafter referred to as Abrol, and further in view of Deng et al. (Rethinking Triplet Loss for Domain Adaptation, January 2021, IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, VOL. 31, NO. 1), hereafter referred to as Deng, and Kang et al. (Deep Adversarial Attention Alignment for Unsupervised Domain Adaptation: the Benefit of Target Expectation Maximization, 2018, arXiv:1801.10068v4), hereafter referred to as Kang.
Regarding claim 4, the rejection of claim 1 in view of Yoo, Abrol, and Deng is incorporated. Deng further teaches a method, comprising:
identifying at least one loss value among a cluster loss value by cluster loss, a class activating mapping (CAM) loss value by CAM loss, or a feature loss value by feature loss; and additionally applying at least one loss value among the identified cluster loss value, CAM loss value, or feature loss value to the reconstructed source data: “In practice, SGC is implemented by using a triplet loss (cluster loss / feature loss) function. By minimizing the triplet loss, SGC reduces the distance between semantically similar images and increases that of semantically dissimilar images” (Deng, page 2, left column, paragraph 2). By enforcing that similar features are nearby in an embedding, classes are clustered.
Deng relates to domain translation with machine learning and is analogous to the claimed invention. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Yoo, Abrol, and Deng to use triplet loss to cluster target features together, as disclosed by Deng. This enables closely related target data to be clustered together for more accurate classification, and is robust to dramatic accuracy costs that pseudo-labeled target data can cause for other losses. See Deng, page 29, right column, paragraph 2 and page 2, left column, paragraph 1.
While Deng fails to disclose the further limitations of the claim, Kang teaches a method, comprising:
identifying at least one loss value among a cluster loss value by cluster loss, a class activating mapping (CAM) loss value by CAM loss, or a feature loss value by feature loss:
“Class activation maps (CAMs), proposed by [32], aim to visualize the class-discriminative image regions used by a CNN. Grad-CAM [20] combines gradient based attention method and CAM, enabling to obtain class-discriminative attention maps without modifying the original network structure as [32]” (Kang, page 4, paragraph 2). Class-discriminative attention maps are thus obtained via the Grad-CAM method.
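As an illustrative aid only, the following Python sketch shows one common way Grad-CAM-style attention maps are computed, assuming PyTorch, a torchvision ResNet-50, and hooks on its last convolutional block; the model, layer choice, and variable names are the Examiner's assumptions, not Kang's implementation.

    import torch
    import torch.nn.functional as F
    from torchvision.models import resnet50

    model = resnet50(weights=None).eval()  # untrained placeholder model
    feats, grads = {}, {}

    # Capture activations and gradients of the last convolutional block.
    model.layer4.register_forward_hook(lambda m, i, o: feats.update(a=o))
    model.layer4.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))

    x = torch.randn(1, 3, 224, 224)   # placeholder input image
    score = model(x)[0].max()          # score of the top predicted class
    score.backward()                   # gradients of the score w.r.t. activations

    w = grads["a"].mean(dim=(2, 3), keepdim=True)  # global-average-pooled gradients
    cam = F.relu((w * feats["a"]).sum(dim=1))      # gradient-weighted channel sum
    cam = cam / (cam.max() + 1e-8)                 # normalize map to [0, 1]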
“We propose using the source network to guide the attention alignment of the target network, as illustrated in Fig. 2. We penalize the distance between the vectorized attention maps between the source and the target networks to minimize their discrepancy. In order to make the attention mechanism invariant to the domain shift, we train the target network with a mixture of real and synthetic data from both source and target domains” (Kang, page 6, paragraph 4).
“Formally, the attention alignment penalty (CAM loss value) can be formulated as [Eq. (5), reproduced as an image in the record]” (Kang, page 7, paragraph 2)
additionally applying at least one loss value among the identified cluster loss value, CAM loss value, or feature loss value to the reconstructed source data:
[Fig. 2 of Kang, reproduced as an image in the record] (Kang, page 5, Fig. 2)
“Through Eq. (5), the distances of attention maps for the paired images (i.e., (x_j^S (source data), x̃_j^T (reconstructed source data)) and (x_n^T, x̃_n^S)) are minimized … The attention alignment penalty L_AT allows the attention mechanism to be gradually adapted to the target domain, which makes the attention mechanism of the target network invariant to the domain shift” (Kang, page 7, paragraph 3). The loss function in Eq. (5) causes the attention maps of the target model to move toward the attention maps of the source model as training goes on.
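The following Python/PyTorch sketch illustrates, under the Examiner's own assumptions, one way such an attention alignment penalty may be computed: each attention map is taken as the channel-wise sum of squared activations (in the style of Zagoruyko et al., cited by Kang), vectorized, L2-normalized, and compared between the source and target networks. It is offered as a non-limiting aid, not as a reproduction of Kang's Eq. (5).

    import torch
    import torch.nn.functional as F

    def attention_map(features):
        """Spatial attention map from a (batch, C, H, W) feature tensor:
        sum of squared channel activations, flattened and L2-normalized."""
        a = features.pow(2).sum(dim=1).flatten(1)  # (batch, H*W)
        return F.normalize(a, dim=1)

    def attention_alignment_penalty(src_feats, tgt_feats):
        """Penalize the distance between source-network and target-network
        attention maps computed on paired images."""
        diff = attention_map(src_feats) - attention_map(tgt_feats)
        return diff.pow(2).sum(dim=1).mean()

Minimizing such a penalty drives the target network's attention maps toward those of the source network over the course of training, consistent with the behavior described above.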
Kang relates to domain transfer with machine learning and is analogous to the claimed invention. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Yoo, Abrol, and Deng to penalize the network with a loss that discourages distant attention maps between corresponding data in source and target networks, as disclosed by Kang. Research has found that CNN model performance is highly dependent on better aligned attention, and this mechanism degrades when a source network is directly applied to target domain data, causing greater error for classification. Thus, aligning the attention maps of the source network with the target network can significantly improve performance. See Kang, page 1, paragraph 2 and page 2, paragraph 2.
Regarding claim 5, the rejection of claim 4 in view of Yoo, Abrol, Deng, and Kang is incorporated. Deng further teaches a method, wherein the generating of the target data further comprises obtaining a cluster loss value based on a preset method so that a distance of feature vectors of the reconstructed source data included in different classes among a plurality of classes is far apart: “SGC (preset method) requires the images regardless of their domains (images including reconstructed source data) to follow two types of semantic relations:
Semantic similarity. Images from the same class are semantically similar, thereby should be mapped nearby in the embedding space.
Semantic dissimilarity. Images from different classes are semantically dissimilar, thereby should be mapped far apart in the embedding space” (Deng, page 31, left column, paragraph 5). As discussed regarding claim 4, SGC is implemented with triplet loss, a cluster loss value.
Deng relates to domain translation with machine learning and is analogous to the claimed invention. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Yoo, Abrol, Deng, and Kang to use triplet loss to push target features of different classes apart, as disclosed by Deng. This enables closely related target data to be clustered together for more accurate classification, and is robust to dramatic accuracy costs that pseudo-labeled target data can cause for other losses. See Deng, page 29, right column, paragraph 2 and page 2, left column, paragraph 1.
Regarding claim 6, the rejection of claim 4 in view of Yoo, Abrol, Deng, and Kang is incorporated. Kang further teaches a method, comprising:
identifying a weight region of the source data:
[Fig. 1 of Kang, reproduced as an image in the record]
“Attention visualization of the last convolutional layer of ResNet-50. The original target input images are illustrated in (a). The corresponding attentions of the source network (weight region[s] of the source data), the target network trained on labeled target data, and the target network adapted with adversarial attention alignment are shown in (b), (c), and (d) respectively” (Kang, page 2, Fig. 1). Figure 1 shows heatmaps corresponding to each attention, each showing the most important image region for classification.
“Zagoruyko et al. [28] define attention as a set of spatial maps indicating which area the network focuses on to perform a certain task” (Kang, page 4, paragraph 3). An attention map maps (weights) a region of an image to its relative importance to a network’s task (e.g., classification).
… to be applied when classifying a class of the source data by an artificial intelligence neural network model including the source domain:
“We propose using the source network to guide the attention alignment of the target network (artificial intelligence neural network model), as illustrated in Fig. 2. We penalize the distance between the vectorized attention maps between the source and the target networks to minimize their discrepancy. In order to make the attention mechanism invariant to the domain shift, we train the target network with a mixture of real and synthetic data from both source and target domains” (Kang, page 6, paragraph 4).
“we train our target network with real and synthetic data from both source and target domains.” (Kang, page 3, paragraph 2)
obtaining a CAM loss value to set a weight region of the reconstructed source data corresponding to the identified source data:
[Fig. 2 of Kang, reproduced as an image in the record] (Kang, page 5, Fig. 2)
“Formally, the attention alignment penalty (CAM loss value) can be formulated as [Eq. (5), reproduced as an image in the record]” (Kang, page 7, paragraph 2)
“Through Eq. (5), the distances of attention maps for the paired images (i.e., (x_j^S (identified source data), x̃_j^T (reconstructed source data)) and (x_n^T, x̃_n^S)) are minimized … The attention alignment penalty L_AT allows the attention mechanism to be gradually adapted to the target domain, which makes the attention mechanism of the target network invariant to the domain shift” (Kang, page 7, paragraph 3).
Kang relates to domain transfer with machine learning and is analogous to the claimed invention. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Yoo, Abrol, and Deng to penalize the network with a loss that discourages distant attention maps between corresponding data in source and target networks, as disclosed by Kang. Research has found that CNN model performance is highly dependent on better aligned attention, and this mechanism degrades when a source network is directly applied to target domain data, causing greater error for classification. Thus, aligning the attention maps of the source network with the target network can significantly improve performance. See Kang, page 1, paragraph 2 and page 2, paragraph 2.
Regarding claim 8, the rejection of claim 4 in view of Yoo, Abrol, Deng, and Kang is incorporated. Deng further teaches a method, wherein the generating of the target data further comprises obtaining the feature loss value so that a feature vector of the source data is the same as a feature vector of the reconstructed source data corresponding to the source data: “As shown in Fig. 1, SGC pulls the target images (reconstructed source data) to the source images (source data) with the same class labels. This indirectly enforces target images to surround their corresponding weight vectors, and thus leads to accuracy improvement on the target images” (Deng, page 29, right column, paragraph 3); “In practice, SGC is implemented by using a triplet loss (feature loss value) function. By minimizing the triplet loss, SGC reduces the distance between semantically similar images and increases that of semantically dissimilar images” (Deng, page 2, left column, paragraph 2). By pulling target features toward source features with the same class, source and target features surround the same class weight vectors and are classified similarly.
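As a non-limiting illustration of a feature loss driving a reconstructed sample's feature vector toward that of its corresponding source sample, the Examiner offers the following Python/PyTorch sketch; the mean-squared-error formulation, names, and shapes are the Examiner's assumptions rather than Deng's disclosed triplet formulation.

    import torch
    import torch.nn.functional as F

    def feature_loss(src_vec, recon_vec):
        """Penalize any difference between the feature vector of the source
        data and that of its reconstructed counterpart, both (batch, dim)."""
        return F.mse_loss(recon_vec, src_vec)

    # Example: 128-dim feature vectors for a batch of 8 paired samples.
    src = torch.randn(8, 128)
    recon = torch.randn(8, 128, requires_grad=True)
    feature_loss(src, recon).backward()  # gradients pull recon toward src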
Deng relates to domain translation with machine learning and is analogous to the claimed invention. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Yoo, Abrol, Deng, and Kang to enforce class similarity between source and target features, as disclosed by Deng. This enables closely related target data to be clustered together for more accurate classification, and is robust to dramatic accuracy costs that pseudo-labeled target data can cause. See Deng, page 29, right column, paragraph 2 and page 2, left column, paragraph 1.
The analysis of claims 16-17 mirrors that of claims 4 & 6, with the exception that claims 16-17 are directed to generic computer hardware which executes the methods of claims 4 & 6. This generic hardware is taught by Abrol, as discussed regarding claim 12. Thus, claims 16-17 are rejected under the same rationales used for claims 4 & 6, respectively.
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Yoo et al. (Pixel-Level Domain Transfer, 2016, arXiv:1603.07442v3), hereafter referred to as Yoo, in view of Abrol et al. (DOMAIN ADAPTATION USING POST-PROCESSING MODEL CORRECTION, filed 1/12/2020, US 2021/0312674 A1), hereafter referred to as Abrol, and further in view of Deng et al. (Rethinking Triplet Loss for Domain Adaptation, January 2021, IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, VOL. 31, NO. 1), hereafter referred to as Deng, Kang et al. (Deep Adversarial Attention Alignment for Unsupervised Domain Adaptation: the Benefit of Target Expectation Maximization, 2018, arXiv:1801.10068v4), hereafter referred to as Kang, and Haddad et al. (MACHINE LEARNING BASED DEPOLARIZATION IDENTIFICATION AND ARRHYTHMIA LOCALIZATION VISUALIZATION, published 11/12/2020, US 20200357517 A1), hereafter referred to as Haddad.
Regarding claim 7, the rejection of claim 6 in view of Yoo, Abrol, Deng, and Kang is incorporated. Kang further teaches a method, wherein the weight region of the source data comprises at least one region of a specific region of image data or a specific frequency region of signal data, and wherein the weight region of the reconstructed source data comprises at least one region of a specific region of image data or a specific frequency region of signal data:
[Fig. 1 of Kang, reproduced as an image in the record]
“Attention visualization of the last convolutional layer of ResNet-50. The original target input images are illustrated in (a). The corresponding attentions of the source network (weight region[s] of the source data), the target network trained on labeled target data, and the target network (weight region[s] of the reconstructed source data) adapted with adversarial attention alignment are shown in (b), (c), and (d) respectively” (Kang, page 2, Fig. 1). Figure 1 shows heatmaps corresponding to each attention, each showing the most important image region for classification.
Kang relates to domain transfer with machine learning and is analogous to the claimed invention. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Yoo, Abrol, and Deng to penalize the network with a loss that discourages distant attention maps between corresponding data in source and target networks, as disclosed by Kang. Research has found that CNN model performance is highly dependent on better aligned attention, and this mechanism degrades when a source network is directly applied to target domain data, causing greater error for classification. Thus, aligning the attention maps of the source network with the target network can significantly improve performance. See Kang, page 1, paragraph 2 and page 2, paragraph 2.
While Yoo, Abrol, Deng, and Kang fail to disclose the further limitations of the claim, Haddad discloses a method, wherein the weight region of the source data and of the reconstructed source data comprises at least one region of a specific region of image data or a specific frequency region of signal data: “Class activation mapping may make it possible to identify regions of an input time series, e.g., of cardiac EGM data, that constitute the reason for the time series being given a particular classification by the one or more arrhythmia classification machine learning models 452. A class activation map for a given classification may be a univariate time series where each element (e.g., at each timestamp at the sampling frequency of the input time series) may be a weighted sum or other value derived from the outputs of an intermediate layer of a neural network or other machine learning model. The intermediate layer may be a global average pooling layer and/or last layer prior to the output layer neurons for each classification.” (Haddad, [0061])
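For illustration only, the following Python/PyTorch sketch shows how a class activation map over a univariate time series may be derived from a 1-D convolutional network with global average pooling, in the manner Haddad describes; the toy network, class count, and names are the Examiner's assumptions, not Haddad's disclosure.

    import torch
    import torch.nn as nn

    class TinySignalNet(nn.Module):
        """Toy 1-D CNN with global average pooling before the output layer."""
        def __init__(self, classes=4):
            super().__init__()
            self.conv = nn.Conv1d(1, 16, kernel_size=7, padding=3)
            self.fc = nn.Linear(16, classes)

        def forward(self, x):                 # x: (batch, 1, T)
            f = torch.relu(self.conv(x))      # feature maps: (batch, 16, T)
            return self.fc(f.mean(dim=2)), f  # logits via GAP, plus features

    def cam_1d(model, x, cls):
        """One importance value per timestamp for the chosen class."""
        _, f = model(x)
        w = model.fc.weight[cls]              # (16,) output-layer class weights
        return (w[None, :, None] * f).sum(dim=1)  # (batch, T)

    net = TinySignalNet()
    signal = torch.randn(2, 1, 500)           # two 500-sample signals
    cam = cam_1d(net, signal, cls=0)          # per-timestamp class evidence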
Haddad relates to CAM calculations on convolutional neural networks and is analogous to the claimed invention. The combination of Yoo, Abrol, Deng, and Kang teaches a system for shifting domain data, in part using a loss that enforces CAM attention similarity between source and reconstructed source images. The claimed invention improves upon this method by weighting frequency regions of signal data as well as image regions. Haddad teaches a method of using CAM to calculate weighted importance of frequency regions in signal data, applicable to the existing combination. A person of ordinary skill in the art would have recognized that using Haddad’s CAM signal calculations would lead to the predictable result of allowing the system to perform on both image and signal frequency data, and would improve the known device by expanding its application to more sources of data that could benefit from domain transfer (MPEP 2143 I. (D) Applying a known technique to a known device (method, or product) ready for improvement to yield predictable results).
Response to Arguments
The following responses address arguments and remarks made in the instant remarks dated 01/09/2026.
Examiner Corrections
Pages from Deng were previously labeled based on page count from the PDF file, not the count actually listed on the pages themselves. This has been corrected in the instant office action.
Objections
The Examiner notes that new objections to the specification have been set forth in light of the instant amendments.
On page 8 of the instant remarks, the Applicant argues that, in light of the instant amendments, the blurry equations and terms of the specification are not necessary for one of ordinary skill to understand the core invention:
“II. Objection to the Specification

The disclosure is objected to over informalities. More specifically, the Office Action objects to paragraphs [0034], [0054]-[0055], [0057], [0059], [0061]-[0062], [0064], and [0066]-[0068] of the specification and alleges that the Equations described in the specification are "fuzzy and very difficult to parse."

In response, Applicant has amended the relevant portions of the specification (paragraphs [0056], [0061], [0067], [0071], and [0074]) to clarify that the technical scope of the present invention is not limited by any particular mathematical formula itself, and that the Equations are merely exemplary descriptions provided to facilitate understanding of the invention.

Specifically, at the end of each Equation, the specification now explicitly states that (1) the Equation is an example for explaining the functional role that a particular loss value performs during the training process, and (2) the present invention is not limited to the specific mathematical form of that Equation.

Accordingly, Applicant submits that the amended specification describes, such that a person of ordinary skill in the art can sufficiently understand and implement, regardless of the detailed form of the Equations, the GAN training control concept that constitutes the gist of the present invention, namely, (1) maintaining class consistency and (2) maintaining inter-class feature relational structure.

Therefore, Applicant submits that the above-identified amendments to the aforementioned paragraphs of the specification obviates this objection and withdrawal of the objection is respectfully requested.”
Regarding the assertion that the equations described by the specification are incidental to a person of ordinary skill in the art sufficiently understanding and implementing the claimed invention, the Examiner respectfully disagrees. As evidenced by the claim language and the Applicant’s own arguments elsewhere in the instant remarks, the losses are a pivotal part of the claimed invention’s functionality. While the equations shown in the instant specification are merely illustrative and not limiting, they significantly aid understanding of the losses applicable to the claimed invention.
Additionally, the specification, except as provided for in 37 CFR 1.821 through 1.825, must have text written plainly and legibly either by a typewriter or machine printer in a nonscript type font (e.g., Arial, Times Roman, or Courier, preferably a font size of 12) lettering style having capital letters which should be at least 0.3175 cm. (0.125 inch) high, but may be no smaller than 0.21 cm. (0.08 inch) high (e.g., a font size of 6) in portrait orientation and presented in a form having sufficient clarity and contrast between the paper and the writing thereon to permit the direct reproduction of readily legible copies in any number by use of photographic, electrostatic, photo-offset, and microfilming processes and electronic capture by use of digital imaging and optical character recognition; and only a single column of text. See 37 CFR 1.52(a) and (b). The fuzzy equations of the specification and their associated fuzzy variables are not considered to be legible.
112 Rejections
In light of the instant amendments, the previous rejections under 35 U.S.C. 112(b) have been withdrawn.
101 Rejections
On page 10 of the instant remarks, the Applicant argues that the claims are directed to statutory subject matter, and thus rejections under 35 U.S.C. 101 are obviated:
“By this Amendment, claims 1, 7, and 12 are amended and claims 2, 3, 13, and 14 are cancelled. Applicant thus submits that claims 1, 4-12, and 15-17 are directed to statutory subject matter. Therefore, the rejections to claims 1-17 are obviated”
The Examiner agrees that the claims are directed to valid statutory categories. However, this does not exempt a claim from containing limitations that recite judicial exceptions, such as abstract ideas. As noted in MPEP 2106.04(a), Examiners should determine whether a claim recites an abstract idea by (1) identifying the specific limitation(s) in the claim under examination that the examiner believes recites an abstract idea, and (2) determining whether the identified limitation(s) fall within at least one of the groupings of abstract ideas listed above.
As noted in the 101 rejections section, claims 1, 4-12, and 15-17 recite abstract ideas. Thus, no rejections are withdrawn on these grounds.
On pages 10-11 of the instant remarks, the Applicant argues that amended claim 1 does not recite abstract ideas:
“Amended claim 1 does not claim a mere mathematical calculation or an abstract idea, but clearly defines a specific technical processing performed in an electronic apparatus.

Specifically, in the amended independent claim, the process of receiving the source data is clearly described as follows:

"receiving, via the communication interface included in the electronic apparatus, source data included in a source domain"

Through this, it is clearly specified that the source data is data received from outside the electronic apparatus, and that the reception is performed via a communication interface included in the electronic apparatus.

In addition, in consideration of the Examiner's comments, Applicant added the following feature to the claim in order to clarify that the process of generating the target data is not a mere abstract processing or a mere recitation of a result, but a technical processing including a specific learning mechanism.

"generating target data by training the reconstructed source data based on the source data using a generative adversarial network (GAN);"

"applying the identified first class loss value, the identified second class loss value and the obtained distance loss value to the reconstructed source data for training the GAN."

In other words, by specifying a feature in which a class loss value and a distance loss value are applied to control training during the GAN training process, it is made clear that the generation of the target data is a technical process including a specific artificial intelligence learning structure and application of a loss function.

As described above, in response to the Examiner's comments, Applicant (1) clarified the components and path for receiving the source data, and (2) clarified that the generation of the target data is a specific technical processing based on GAN training.

As such, claim 1 combines specific hardware components (the electronic apparatus and the communication interface) with a specific artificial intelligence learning structure (GAN) and a loss-function application scheme, and therefore does not correspond to a mere abstract idea or a mere listing of mathematical formulas.”
Regarding the Applicant’s arguments above, the Examiner respectfully disagrees that claim 1, as amended, recites no mental processes. As stated in MPEP 2106.04(a)(2)(III): "The courts do not distinguish between mental processes that are performed entirely in the human mind and mental processes that require a human to use a physical aid (e.g., pen and paper or a slide rule) to perform the claim limitation. See, e.g., Benson, 409 U.S. at 67, 65, 175 USPQ at 674-75, 674 … Nor do the courts distinguish between claims that recite mental processes performed by humans and claims that recite mental processes performed on a computer. As the Federal Circuit has explained, '[c]ourts have examined claims that required the use of a computer and still found that the underlying, patent-ineligible invention could be performed via pen and paper or in a person’s mind.' Versata Dev. Group v. SAP Am., Inc., 793 F.3d 1306, 1335, 115 USPQ2d 1681, 1702 (Fed. Cir. 2015). See also Intellectual Ventures I LLC v. Symantec Corp., 838 F.3d 1307, 1318, 120 USPQ2d 1353, 1360 (Fed. Cir. 2016) ('[W]ith the exception of generic computer-implemented steps, there is nothing in the claims themselves that foreclose them from being performed by a human, mentally or with pen and paper.'); Mortgage Grader, Inc. v. First Choice Loan Servs. Inc., 811 F.3d 1314, 1324, 117 USPQ2d 1693, 1699 (Fed. Cir. 2016) (holding that computer-implemented method for 'anonymous loan shopping' was an abstract idea because it could be 'performed by humans without a computer')."
Claim 1 recites limitations amounting to mental processes performed on generic data structures and generic computer hardware. Generic data structures amount to generic computer components and are insufficient to render a mentally performable task non-abstract. For example, claim 1 recites the limitation “generating target data by training the reconstructed source data based on the source data using a generative adversarial network (GAN)”, which couples a mentally performable process (“generating target data”) with a generic training process based on another judicial exception (“by training the reconstructed source data based on the source data”) and a generic data structure (“using a generative adversarial network (GAN)”). Merely performing a mental process with a generic training process and generic data structures does not render it non-abstract.
See the 101 rejections section for more detail on judicial exceptions recited by amended claim 1 using similar reasoning. No rejections are withdrawn on these grounds.
On page 11 of the instant remarks, the Applicant argues that the claimed invention represents a technical improvement to computer technology:
“Accordingly, amended claim 1 provides a specific technical improvement in the field of computer technology, namely, a technical solution that maintains learning stability and class discrimination performance during cross-domain data generation, and thus satisfies patent eligibility under §101.”
In response to the Applicant’s argument that the claimed invention improves upon existing technology, the Examiner respectfully disagrees. The improvement of a claimed invention must be sufficiently detailed, as noted in MPEP 2106.05(a): “If it is asserted that the invention improves upon conventional functioning of a computer, or upon conventional technology or technological processes, a technical explanation as to how to implement the invention should be present in the specification. That is, the disclosure must provide sufficient details such that one of ordinary skill in the art would recognize the claimed invention as providing an improvement. The specification need not explicitly set forth the improvement, but it must describe the invention such that the improvement would be apparent to one of ordinary skill in the art … After the examiner has consulted the specification and determined that the disclosed invention improves technology, the claim must be evaluated to ensure the claim itself reflects the disclosed improvement in technology. Intellectual Ventures I LLC v. Symantec Corp., 838 F.3d 1307, 1316, 120 USPQ2d 1353, 1359 (Fed. Cir. 2016) (patent owner argued that the claimed email filtering system improved technology by shrinking the protection gap and mooting the volume problem, but the court disagreed because the claims themselves did not have any limitations that addressed these issues). That is, the claim must include the components or steps of the invention that provide the improvement described in the specification.”
The Applicant merely asserts improvements, but does not explain how the claimed invention improves upon learning stability or class discrimination performance. Additionally, as noted in the MPEP above, this explanation of improvements must be supported by the specification, such that the improvement would be apparent to one of ordinary skill in the art, and the Applicant must ensure the argued improvements are reflected in the claims.
Thus, the argument for improvement is considered to be insufficient, and no rejections are withdrawn on these grounds.
On pages 11-12 of the instant remarks, the Applicant argues that the dependent claims do not recite abstract ideas:
“Claims 4-12 and 15-17 are all dependent claims dependent on claim 1, and merely add additional limitations regarding loss-function application schemes, training control methods, or data characteristics based on the feature of claim 1.

In other words, claims 4-12 and 15-17 include GAN training structure based on an electronic apparatus recited in claim 1 as it is, and further specify its concrete implementation method or application conditions. The dependent claims merely limit or specify the technical concept of the independent claim, and do not introduce a new abstract concept separate from the independent claim.

Therefore, as long as claim 1 satisfies patent eligibility, the dependent claims, which incorporate the technical feature of claim 1, cannot be considered abstract ideas and therefore also satisfy patent eligibility under § 101.

Accordingly, withdrawal of the Rejections Under 35 U.S.C. §101 is respectfully requested.”
Regarding the Applicant’s arguments above, the Examiner respectfully disagrees. As noted in previous arguments, claim 1 is found to recite judicial exceptions, recitations which are applicable to substantially similar independent claim 12 and inherited by all dependent claims.
Additionally, more judicial exceptions are recited by the dependent claims. For example, claim 4 recites “identifying at least one loss value among a cluster loss value by cluster loss, a class activating mapping (CAM) loss value by CAM loss, or a feature loss value by feature loss”, which can be performed mentally.
As noted in the 101 rejections section, the additional elements of the claims are not found to be sufficient to practically integrate the claimed invention or amount to significantly more than the recited judicial exceptions. Thus, no claims are found to be eligible under 35 U.S.C. 101, and no rejections are withdrawn on these grounds.
103 Rejections
On page 13 of the instant remarks, the Applicant argues that Yoo, Abrol (referred to by the Applicant as "Arbol"), and Deng fail to disclose amended claim 1:
“Independent claim 12, although different in scope, is amended to include similar features as amended independent claim 1 and is allowable for at least the reasons that independent claim 1 is allowable.

Applicant submits that the cited references fail to disclose or render obvious the presently claimed combination of features recited in independent claim 1.

The amended independent claim, in generating a GAN-based target domain, necessarily includes the features of (1) applying a class loss such that different loss values are assigned depending on whether the class of the reconstructed source data matches or does not match the class of the source data (as recited in dependent claim 2), and (2) applying a distance loss such that a distance relationship (distance map) among feature vectors belonging to different classes in the source domain is defined and GAN training is controlled so that the distance relationship is maintained during generation of the target data (dependent claim 3).

In addition, it necessarily includes the operation of applying both the class loss value and the distance loss value to the reconstructed source data, rather than applying at least one of the class loss value or the distance loss value to the reconstructed source data.

In other words, the core of the present invention lies not in simply using individual losses, but in a technology for controlling learning so that the inter-class relational structure learned in the source domain is maintained in the target domain.

Yoo discloses a technology for transforming data distributions using GAN or domain adaptation, but does not disclose assigning loss values differentially depending on whether classes match, nor the feature of maintaining inter-class feature distance relationships. Arbol may disclose that class information can be used in learning, but does not disclose controlling loss magnitude depending on class matching or mismatching, nor the concept of maintaining the relative structure of inter-class feature distances. Deng may disclose a technology that considers feature similarity or separability, but does not disclose the feature of defining a class-based distance map based on the source domain and controlling learning so as to maintain the distance map in a GAN-based target domain generation process.

Even if Yoo, Arbol, and Deng are combined, it is difficult to consider that a person of ordinary skill in the art would arrive at the following features of the present invention:

(1) first defining relative distance relationships among feature vectors of different classes in the source domain,

(2) controlling training so that the distance relationships are maintained even during generation of the target data using GAN, and

(3) simultaneously assigning loss values differentially depending on whether classes match.

The cited references individually address class utilization, feature distance, or GAN training, but do not provide a technical concept or motivation for controlling training by organically combining multiple losses in a direction that preserves inter-class relational structure as in the present invention.”
In response to the Applicant's argument that the relied upon prior art fails to disclose limitations of amended claim 1, the Examiner notes that one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986). Yoo discloses a class loss value that applies different values to train the discriminator depending on whether or not the reconstructed source data matches the source data (Yoo, page 8, paragraph 1). While Yoo fails to disclose receiving source data from a communication interface, this deficiency is remedied by Abrol (Abrol, [0061]). While Yoo and Abrol fail to disclose using a distance-based loss to maintain class distance relationships across source data into reconstructed source data, this limitation is disclosed by Deng (Deng, page 29, Fig. 1 & page 1, right column, paragraph 3 & page 2, left column, paragraph 2).
With regards to motivation, Yoo already discloses using multiple loss functions to maintain key properties of reconstructed source data, including class relationships from the source data, during GAN training, as described in the cited sections above and in more detail in the 103 rejections section. It would have been obvious for one of ordinary skill to execute Yoo’s method on actual hardware with the ability to receive data, as disclosed by Abrol. Deng’s distance-based loss allows for more accurate classification on reconstructed data, and is robust to accuracy costs that pseudo-labeled target data (an alternative method for achieving this) can cause (Deng, page 29, right column, paragraph 2 & page 2, left column, paragraph 1), which provides sufficient motivation for one of ordinary skill to combine it with Yoo’s losses.
While none of these references alone discloses all limitations of amended claim 1, the combination certainly does, and one of ordinary skill in the art would have been motivated to form such a combination, as explained in greater detail in the 103 rejections section. Thus, no rejections are withdrawn on these grounds.
On page 14 of the instant remarks, the Applicant argues that the claimed invention has different effects than those of the relied upon prior art:
“In addition, compared with Yoo, Arbol, and Deng, the present invention provides the following qualitatively different technical effects:

(1) solving a problem in which feature distributions may not be maintained during the GAN training process,

(2) maintaining, even in the target domain, the class discrimination capability and relational structure learned by the source model, and

(3) securing stable performance in a new device or environment while reducing the burden of retraining.

These effects go beyond mere accuracy improvement or feature alignment, and are not effects that can be easily predicted from the cited references.

Accordingly, Applicant submits that claim 1 is patentable over the combined references.

Therefore, Applicant submits that the cited references fail to disclose or render obvious the presently claimed features recited in independent claims 1 and 12. As such, the rejection under 35 U.S.C. § 103 is improper.

Accordingly, withdrawal of the rejection is respectfully requested.”
In response to applicant's arguments above, the fact that the inventor has recognized another advantage which would flow naturally from following the suggestion of the prior art cannot be the basis for patentability when the differences would otherwise be obvious. See Ex parte Obiaya, 227 USPQ 58, 60 (Bd. Pat. App. & Inter. 1985).
The effects of the claimed invention argued by the Applicant would have been obvious to one of ordinary skill in the art in view of Yoo and Deng. Yoo explicitly provides a solution to maintaining feature distributions across GAN training by ensuring consistency between source and target domain data through GAN discriminators (Yoo, Abstract & 1. Introduction). Deng provides a method that maintains relational structures between source and reconstructed source (target) data, one which explicitly maintains discriminative capabilities such as classification (Deng, page 29, right column, paragraphs 2-3 & page 2, left column, paragraphs 1-2). Deng also explicitly states that its method is an alternative to traditional retraining (Deng, page 30, right column, paragraph 3), which would obviously result in performance improvements by reducing burdens associated with retraining.
On page 15 of the instant remarks, the Applicant argues that the cancelled claims should have rejections withdrawn:
“Dependent Claims 2-3 and 13-14

By this amendment, claims 2-3 and 13-14 are cancelled. Accordingly, the rejection of claims 2-3 and 13-14 is moot.

Accordingly, withdrawal of the rejections is respectfully requested.”
In light of the instant amendments, rejections of the cancelled claims have been withdrawn.
On page 15 of the instant remarks, the Applicant argues that dependent claims are patentable in view of their parent claims:
“Dependent Claims 6-7 and 17

Claims 6-7 and 17 variously depend from independent claims 1 and 12. Because the applied references fail to disclose or render obvious the features presently recited in independent claims 1 and 12, dependent claims 6-7 and 17 are patentable for at least the reasons that claims 1 and 12 are patentable, as well as for the additional features recited therein.

Accordingly, withdrawal of the rejections is respectfully requested.”
Regarding the Applicant’s arguments above, the Examiner respectfully disagrees. As discussed in previous responses above, claim 1 is not found to be patentable over the prior art. Similar reasoning is applicable to substantially similar independent claim 12.
Thus, no rejections of the dependent claims are withdrawn on these grounds.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Haas et al. (Composite simulation modeling and analysis, published 7/31/2014, US20140214383A1) teaches a method of shifting the domain of data generated by a first model for use as input into a second model.
Fernando et al. (Unsupervised Visual Domain Adaptation Using Subspace Alignment, 2013, 2013 IEEE International Conference on Computer Vision) teaches a method of shifting the domain of image data by aligning source and target subspaces.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Aaron P Gormley whose telephone number is (571)272-1372. The examiner can normally be reached Monday - Friday 12:00 PM - 8:00 PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Michelle T Bechtold can be reached at (571) 431-0762. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/AG/
Examiner, Art Unit 2148

/MICHELLE T BECHTOLD/
Supervisory Patent Examiner, Art Unit 2148