DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
Due to communications filed 12/29/25, the following is a final office action. Claims 1 and 15 are amended. Claim 5 is cancelled. Claim 17 is new. Claims 1-4 and 6-17 are pending in this application and are rejected as follows.
Claim Rejections - 35 USC §101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-4 and 6-17 are rejected under 35 U.S.C. 101 because the claimed invention is directed
to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without
significantly more.
With regard to present claims 1-4 and 6-17, the claims recite a series of steps and,
therefore, are directed to a process, which is a statutory category.
In addition, the claims recite a judicial exception. The claims as a whole recite "Mental
Processes". The claimed invention is a method that allows for the access, analysis, update, and
communication of electronic records, which are concepts performed in the human mind (including an
observation, evaluation, judgment, or opinion). The mere nominal recitation of a generic
computer/computer network does not take the claims out of the "Mental Processes" grouping. Thus, the
claims recite an abstract idea.
Furthermore, the abstract idea is not integrated into a practical application. The claims as a whole
merely describe how to generally "apply" the concept of accessing, analyzing, updating, and
communicating information in a computer environment. The claimed computer components are recited
at a high level of generality and are merely invoked as tools to perform an existing records-update
process. Simply implementing the abstract idea on a generic computer is not a practical application of
the abstract idea.
Finally, the claims do not recite an inventive concept. As noted previously, the claims as a whole
merely describe how to generally "apply" the concept of accessing, analyzing, updating, and
communicating information in a computer environment. Thus, even when viewed as a whole, nothing in
the claims adds significantly more (i.e., an inventive concept) to the abstract idea. The claims are ineligible.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102
and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory
basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of
rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same
under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections
set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is
not identically disclosed as set forth in section 102, if the differences between the claimed invention
and the prior art are such that the claimed invention as a whole would have been obvious before the
effective filing date of the claimed invention to a person having ordinary skill in the art to which the
claimed invention pertains. Patentability shall not be negated by the manner in which the invention
was made.
Claim(s) 1-4, 6-9, 12-14, 16, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Guinard
et al (US 20210142337 A1), in view of LILLY et al (KR 20180052626 A), and further in view of Aghakhani et al, "Detecting Deceptive Reviews using Generative Adversarial Networks".
As per claim 1, Guinard et al discloses:
receiving, by one or more processors of a processing system, one or more images of one or
more views of the item, said images having been captured by a mobile communications device of a user,
([0048] Furthermore, electronic device 110 may acquire one or more images of the product and/or the
tag or label associated with the product);
analysing said captured images, by the processing system, according to one or more trained
machine learning models to provide an identification of the item, ([0058] In some embodiments, the
product authenticity score is determined from one or more authenticity features using one or more
pretrained machine-learning model and/or one or more pretrained neural networks (such as a
convolutional neural network or a recursive neural network), which use the one or more authenticity
features as inputs, and which output the product authenticity score);
determining, by the processing system, using said trained machine learning model, based on the identification of the item, a risk profile for the item, ([0058] In some embodiments, the product authenticity score may have three values (such as low risk, moderate risk and high risk) or may be quantitative (such as an authenticity probability equal to or between 0 and 100%...In some embodiments, the product authenticity score is determined from one or more authenticity features using one or more pretrained machine-learning model and/or one or more pretrained neural networks (such as a convolutional neural network or a recursive neural network), which use the one or more authenticity features as inputs, and which output the product authenticity score); and
sending, from the processing system to the user device, a confidence score based at least on
said risk profile, the confidence score giving an indication of how likely it is that the item is an authentic
item and/or is not a product of an illegitimate use of a proprietary or otherwise protected or controlled
process, ([0035], Thus, the authenticity verification techniques may facilitate an increase in authorized
commercial activity, may improve user confidence in products, and may improve the user experience
when determining whether a product is authentic.);
wherein: the item is marked with a machine-readable mark in which at least a unique identifier
of the item is encoded, ([0097] In some embodiments, a product may be associated with a (digital)
product identity using a uniform resource locator. The uniform resource locator may be encoded in a
machine-readable way (such as a QR code). This approach may allow at least two factors to be used to
authenticate a product, such as digital product data and physical attributes of a product. ); and
the method further comprising: receiving, at the processing system, the unique identifier of the item, either: by the processing system decoding the unique identifier from the mark in at least one of said captured images; or by receiving the unique identifier from the mobile communications device, ([0043] In the described embodiments processing a packet or frame in electronic device 110 and/or access point 114 includes: receiving signals (such as wireless signals 128) with the packet or frame; decoding/extracting the packet or frame from received wireless signals 128 to acquire the packet or frame; and processing the packet or frame to determine information contained in the packet or frame.);
based on said check, adjusting the risk profile by the processor; and updating the confidence
score, by the processing system, based on the adjusted risk profile, (Abstract: Moreover, the computer
may determine a product authenticity score based at least in part on a comparison of the information
and the second information);
Guinard et al does not disclose the following limitations; however, LILLY et al discloses:
the processing system has access to a database of legitimate items, said database of legitimate
items comprising a plurality of legitimate unique identifiers of a corresponding plurality of legitimately
produced items; accessing the database of legitimate items, by the processing system, to check at least
whether the unique identifier of the item corresponds to a legitimately produced item, (LILLY et al (KR
20180052626 A) discloses: "In another embodiment, the absence of return signals or return signals from
tags 402 may be used to identify counterfeit goods or authenticate legitimate goods. In one exemplary
approach, each legitimate object is associated with a tag 402 having a unique identifier. At various
times, the tag identifier may be checked against a database of tag identifiers known to be associated
with legitimate products. Exemplary times at which goods are inspected are determined when passing
through customs control checkpoints and when ownership or rights to the goods occur between the
parties (eg, from the manufacturer to the importer, from the importer to the distributor, from the
distributor to the store owner, From the store owner to the consumer). If there is a match between the
received tag identifier and the database of known known tag identifiers, the goods may be cleared by
the customs authority or received by the receiving party. If there is no match or there is no tag 402, the
customs authority may confiscate the product and conduct an investigation, or the receiving party may
reject the product");
said unique identifier of the item persisting in the database of legitimate items after said check.
(LILLY et al “Another precaution is to maintain a database of tag identifiers for all tags 402 that must be present at site 424 in computer system 418 that processes information from readers 408 at site 424”).
It would have been obvious to one of ordinary skill in the art at the time of the invention to
include the above limitations as taught by LILLY et al in the systems of Guinard et al, since the claimed
invention is merely a combination of old elements, and in the combination each element merely would
have performed the same function as it did separately, and one of ordinary skill in the art would have
recognized that the results of the combination were predictable.
Guinard et al and LILLY et al do not disclose the following limitations; however, Aghakhani et al discloses: identifying the item as having been illegitimately produced either by making a copy of a legitimately produced item or through unauthorized use of a legitimate process for producing the item, (Aghakhani et al: Page 89, col 2, para 4, lines 1-10: “To address the limitations of the existing techniques, we propose FakeGAN, which is a technique based on Generative Adversarial Network (GAN) [14]. GANs are a class of artificial intelligence algorithms used in unsupervised machine learning, implemented by a system of two neural networks contesting with each other in a zero-sum game framework. GANs have been used mostly for image-based applications [14], [15], [16], [17]. In this paper, for the first time, we propose the use of GANs for a text classification task, i.e., detecting deceptive reviews”; ALSO SEE: Page 89, col 2, para 5, line 3 through Page 90, col 1, para 1, line 8: “FakeGAN uses two discriminator models D, D’ and one generative model G. The discriminator model D tries to distinguish between truthful and deceptive reviews whereas D’ tries to distinguish between reviews generated by the generative model G and samples from deceptive reviews distribution. The discriminator model D’ helps G to generate reviews close to the deceptive reviews distribution, while D helps G to generate reviews which are classified by D as truthful.”).
said trained machine learning models comprising generative AI models configured to combine multiple different sources of information, including image data, text data, time data, news information, and/or geographical data, (Aghakhani et al: Page 90, col 1, para 2, lines 1-7: “Our intuition behind using two discriminators is to create a stronger generator model. If in the adversarial learning phase, the generator gets rewards only from D, the GAN may face the mode collapse issue [20], as it tries to learn two different distributions (truthful and deceptive reviews). The combination of D and D’ trains G to generate better deceptive reviews which in turn train D to be a better discriminator”).
It would have been obvious to one of ordinary skill in the art at the time of the invention to
include the above limitations as taught by Aghakhani et al in the systems of Guinard et al, since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.
As per claim 2, Guinard et al discloses:
wherein: the mobile communications device is configured to decode the unique identifier from
the mark in at least one of said captured images; or the mark is in a human-readable and/or human-
decodable format, the user entering the unique identifier into the mobile communications device.
[0043] In the described embodiments processing a packet or frame in electronic device 110 and/or
access point 114 includes: receiving signals (such as wireless signals 128) with the packet or frame;
decoding/extracting the packet or frame from received wireless signals 128 to acquire the packet or
frame; and processing the packet or frame to determine information contained in the packet or frame.
As per claim 3, Guinard et al discloses:
wherein the database of legitimate items further includes, for one or more of the unique
identifiers, historical data related to the corresponding legitimately produced item, said historical data
indicating that said unique identifier has already been checked, said adjustment to the risk profile and
the subsequent updating of the confidence score resulting in a reduction in the likelihood that the item
is an authentic item and/or is not a product of an illegitimate use of a proprietary or otherwise
protected or controlled process, (Abstract: Then, the computer may access, based at least in part on the identifier,
stored second information about the product that specifies: a history of the product. Moreover, the
computer may determine a product authenticity score based at least in part on a comparison of the
information and the second information).
As per claim 4, Guinard et al discloses:
wherein the database of legitimate items further includes, for one or more of the unique
identifiers, commercial data related to the corresponding legitimately produced items, said commercial
data indicating that the item having said unique identifier was intended for sale in one or more first
geographical regions, the method further comprising: retrieving from the mobile communications
device, by the processing system, a second geographical location, said second geographical location
being a geographical location of the mobile communications device when the image of the item was
captured; said adjustment to the risk profile and the subsequent updating of the confidence score
resulting in a reduction in the likelihood that the item is an authentic item and/or is not a product of an
illegitimate use of a proprietary or otherwise protected or controlled process if the second geographical
location is not the same as any of the first geographical locations, ([0035] By selectively providing the
notification, these authenticity verification techniques may determine whether the product is
potentially fraudulent (or not authentic) or is unauthorized without requiring the use of specialized tags
or labels. Moreover, the authenticity verification techniques may allow a user of the electronic device
(such as a cellular telephone) to check and/or confirm the authenticity and/or provenance of the
product in a seamless manner and without requiring special training. Consequently, the authenticity
verification techniques may provide improved authentication of products. This capability may reduce or
eliminate counterfeiting of the products and unauthorized use of brands associated with the products.
Thus, the authenticity verification techniques may facilitate an increase in authorized commercial
activity, may improve user confidence in products, and may improve the user experience when
determining whether a product is authentic).
As per claim 6, Guinard et al discloses:
the method further comprising: comparing, by the processing system, a language of one or
more words from the image with a name of the corresponding item according to the database of
legitimate products, ([0053] Moreover, computer 120-1 may determine a product authenticity score
based at least in part on a comparison of the information and the second information. Notably, the
product authenticity score may indicate that the product is potentially fraudulent or is unauthorized
when the environment is different from the expected environment. For example, the information may
specify...a language spoken by other individuals in the environment, one or more images of the
environment, etc.).
As per claim 7, Guinard et al discloses:
the method further comprising: comparing, by the processing system, a language of one or
more words from the image with the first geographical region, ([0053] Moreover, computer 120-1 may
determine a product authenticity score based at least in part on a comparison of the information and
the second information. Notably, the product authenticity score may indicate that the product is
potentially fraudulent or is unauthorized when the environment is different from the expected
environment. For example, the information may specify a location of the product, either directly (such
as GPS coordinates, triangulation and/or trilateration information, a cellphone carrier in a city, a state,
or a country, etc.) and/or indirectly (such as a temperature, a barometric pressure, a magnetometer
reading, from the sound in the environment, e.g., a language spoken by other individuals in the
environment, one or more images of the environment, etc.)).
As per claim 8, Guinard et al discloses:
retrieving from the mobile communications device, by the processing system, one or more from:
an identifier of the mobile communications device; and a time when the image was captured, ([0053]
For example, the temperature, the barometric pressure, the magnetometer reading, the sound in the
environment and/or the one or more images of the environment may be analyzed to determine the one
or more attributes (such as the location, a time of day)).
As per claim 9, Guinard et al discloses:
wherein the processing system is located in a remote server, ([0005] In a first group of
embodiments, a computer that performs authenticity verification is described. This computer may
include: a network interface that communicates with an electronic device (which may be remotely
located from the computer); a processor; and memory that stores program instructions).
As per claim 12, Guinard et al discloses:
wherein said item is a packaging for a proprietary or otherwise protected or controlled
pharmaceutical product, ([0047] Note that the identifier may include or may be compatible with one or
more of... a pharmaceutical product identifier (PhPID)).
As per claim 13, Guinard et al discloses:
a mobile communications device communicably connectable to a remote server, the server
comprising one or more processors and a memory and having access to a database of legitimate items,
the system being configured to carry out the steps of the method of claim 1, ([0097] In some
embodiments, a product may be associated with a (digital) product identity using a uniform resource
locator. The uniform resource locator may be encoded in a machine-readable way (such as a QR code).
This approach may allow at least two factors to be used to authenticate a product, such as digital
product data and physical attributes of a product.); [0005] In a first group of embodiments, a computer
that performs authenticity verification is described. This computer may include: a network interface that
communicates with an electronic device (which may be remotely located from the computer); a
processor; and memory that stores program instructions. Also, please see the rejection of independent
claim 1).
As per claim 14, please see the rejection of independent claim 1.
As per claim 16, Guinard et al discloses:
wherein said trained machine learning model is further configured to take account of one or
more real-time indicators, such as one or more news reports or one or more notifications concerning
the existence of known counterfeiting activities, ([0016] Then, the electronic device may provide the
information addressed to a computer that specifies the identifier of the product and: the environment
that includes the product, and/or the individual associated with the product. Next, the electronic device
may receive a notification associated with the computer, where the notification corresponds to a
product authenticity score of the product).
As per claim 17, Guinard et al does not disclose: wherein said generative AI model is a transformer-based model.
However, Lilly discloses: “Under this condition, the receiving structure can be considered to be ‘mode-matched’ with the surface waveguide. A transformer link and/or impedance matching network 324 around the structure may be inserted between the probe and the electrical load 327 to couple power to the load. Inserting the impedance matching network 324 between the probe terminals 321 and the electrical load 327 may achieve a conjugate-match condition for maximum power transfer to the electrical load 327.”
It would have been obvious to one of ordinary skill in the art at the time of the invention to
include the above limitations as taught by LILLY et al in the systems of Guinard et al, since the claimed
invention is merely a combination of old elements, and in the combination each element merely would
have performed the same function as it did separately, and one of ordinary skill in the art would have
recognized that the results of the combination were predictable.
Claim(s) 10-11 is/are rejected under 35 U.S.C. 103 as being unpatentable over Guinard et al (US 20210142337 A1), and further in view of LILLY et al (KR 20180052626 A), and further in view of Aghakhani et al “Detecting Deceptive Reviews using Generative Adversarial Networks”, and further in view of McCleland et al (US 20160381200 A1).
As per claim 10, Guinard et al does not disclose:
wherein the unique identifier of the item is encoded in a linear barcode or in a matrix barcode
visible on the item whose authenticity is to be verified.
However, McCleland et al (US 20160381200 A1) discloses:
[0076] The electronic device then, at a following stage (408), generates an optical device
identifier. The optical device identifier is an optical machine-readable representation of the device
identifier. In one exemplary scenario, the optical device identifier is a barcode which represents the
unique device identifier. The barcode may take on any form of barcode, such as a linear barcode or a
two dimensional bar code (e.g. a PDF 417 or compact PDF 417 barcode).
It would have been obvious to one of ordinary skill in the art at the time of the invention to
include the above limitations as taught by McCleland et al in the systems of Guinard et al, since the
claimed invention is merely a combination of old elements, and in the combination each element merely
would have performed the same function as it did separately, and one of ordinary skill in the art would
have recognized that the results of the combination were predictable.
As per claim 11, Guinard et al does not disclose: the item further comprising one or more
embossed patterns, the embossed patterns being machine readable, one or more of said trained
machine learning models further taking into account the embossed patterns in determining said result.
However, McCleland et al (US 20160381200 A1) discloses: (Abstract: An optical device identifier
being an optical machine-readable representation of the unique device identifier is generated and
output on a display screen of the electronic device for subsequent acquisition by a user device.
Outputting the optical device identifier on the display screen of the device for acquisition by a user
device may obviate the need for the device identifier to be otherwise displayed on the electronic device,
for example, by way of a printed or embossed label).
It would have been obvious to one of ordinary skill in the art at the time of the invention to
include the above limitations as taught by McCleland et al in the systems of Guinard et al, since the
claimed invention is merely a combination of old elements, and in the combination each element merely
would have performed the same function as it did separately, and one of ordinary skill in the art would
have recognized that the results of the combination were predictable.
Claim(s) 15 is/are rejected under 35 U.S.C. 103 as being unpatentable over Guinard et al (US
20210142337 A1), and further in view of LILLY et al (KR 20180052626 A), and further in view of Aghakhani et al “Detecting Deceptive Reviews using Generative Adversarial Networks”, and further in view of DOLAN et al (US 20230123535 A1).
As per claim 15, Guinard et al does not disclose: wherein said transformer-based deep learning
model is a foundation model.
However, DOLAN et al (US 20230123535 A1) discloses: ([0018] Multimodal output generated by
an ML model according to aspects of the present disclosure may comprise natural language output
and/or programmatic output, among other examples. The multimodal output may be processed and
used to affect the state of an associated application, such as a video game application or other virtual
environment. For example, at least a part of the programmatic output may be executed or may be used
to call an application programming interface (API) of the application. A generative multimodal ML model
(also generally referred to herein as a multimodal ML model) used according to aspects described herein
may be a generative transformer model, in some examples. Example ML models include, but are not
limited to, the BigScience Large Open-science Open-access Multilingual Language Model (BLOOM),
DALL-E, DALL-E 2, or Jukebox. In some instances, explicit and/or implicit feedback may be processed to
improve the performance of multimodal machine learning model. In further examples, the generative
multimodal ML model is operable to generate virtual objects in the virtual environment, computer
executable code capable of generating, modifying, or controlling object or characters in the virtual
environment, or the like. That is, the generative multimodal model may also function as a code
generation model which generates executable code or programmatic content for the virtual
environment or associated application. In examples, the authoring environment may include multiple
machine learning models, e.g., a generative model, a code generation model, a text generation model, a
conversational model, a virtual object generation model, or the like. Alternatively, or additionally, the
authoring environment may include a foundational model).
It would have been obvious to one of ordinary skill in the art at the time of the invention to
include the above limitations as taught by DOLAN et al in the systems of Guinard et al, since the claimed
invention is merely a combination of old elements, and in the combination each element merely would
have performed the same function as it did separately, and one of ordinary skill in the art would have
recognized that the results of the combination were predictable.
Response to Arguments
Applicant's arguments filed 12/29/25 have been fully considered but they are not persuasive. With regard to the 101 rejection, Applicant amends the claims to recite that the claimed method is for “determining whether a physical item has been illegitimately produced,” which, according to Applicant, is a technical improvement. However, Examiner respectfully disagrees. Although the claim relates to determining whether a physical item has been illegitimately produced, the claim is directed to collecting and analyzing information and then making a determination. The physical item is merely the subject of the analysis. Furthermore, the claim does not recite a technological improvement to image processing, machine learning, or computer functionality.
Applicant further argues that the claim also recites a transformer-based deep learning model used in conjunction with other models, which include a generative AI model, such that the information used changes over time. However, “[m]erely using a known machine learning technique to perform data analysis does not render a claim non-abstract.” USPTO AI Subject Matter Eligibility Guidance (2024 Update). Furthermore, “[p]atents that do no more than claim the application of generic machine learning to new data environments, without disclosing improvements to the machine learning models to be applied, are patent ineligible under § 101.” Recentive Analytics, Inc. v. Fox Corp., No. 2023-2437 (Fed. Cir. Apr. 18, 2025), slip op. at 18. “Finally, the claimed methods are not rendered patent eligible by the fact that (using existing machine learning technology) they perform a task previously undertaken by humans with greater speed and efficiency than could previously be achieved.” Id., slip op. at 15. “[T]he only thing the claims disclose about the use of machine learning is that machine learning is used in a new environment.” Id., slip op. at 13. “The requirements that the machine learning model be ‘iteratively trained’ or dynamically adjusted in the Machine Learning Training patents do not represent a technological improvement.” Id., slip op. at 12.
In addition, Applicant argues that claim 1 is amended to recite that the process allows for different kinds of illegitimate production of goods to be identified, which is a technical improvement. However, the ability to identify different kinds of illegitimate production merely reflects an improvement in the underlying abstract idea of classification and evaluation. Again, as disclosed in preceding paragraphs, the claim does not recite any technological improvement to image processing, machine learning, or computer functionality. The determination of multiple illegitimacy types is an abstract result.
Applicant’s arguments, see arguments/remarks filed 12/29/25, with respect to the rejection(s) of claim(s) 1-9, 12-14, and 16 under 35 U.S.C. 103 as being unpatentable over Guinard et al (US 20210142337 A1) in view of LILLY et al (KR 20180052626 A), have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Aghakhani et al, “Detecting Deceptive Reviews using Generative Adversarial Networks”.
Similarly, claim(s) 10-11 is/are now rejected under 35 U.S.C. 103 as being unpatentable over Guinard et al (US 20210142337 A1), and further in view of LILLY et al (KR 20180052626 A), and further in view of Aghakhani et al “Detecting Deceptive Reviews using Generative Adversarial Networks”, and further in view of McCleland et al (US 20160381200 A1).
Similarly, claim(s) 15 is/are now rejected under 35 U.S.C. 103 as being unpatentable over Guinard et al (US 20210142337 A1), and further in view of LILLY et al (KR 20180052626 A), and further in view of Aghakhani et al “Detecting Deceptive Reviews using Generative Adversarial Networks”, and further in view of DOLAN et al (US 20230123535 A1).
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Akiba Robinson whose telephone number is 571-272-6734 and email is Akiba.Robinsonboyce@USPTO.gov. The examiner can normally be reached on Monday-Thursday 6:30am-4:30pm.
If attempts to reach the Examiner by telephone are unsuccessful, the Examiner's supervisor, Resha Desai can be reached on 571-270-7792. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Any inquiry of a general nature or relating to the status of this application or proceeding should be directed to the receptionist whose telephone number is (703) 305-3900.
January 14, 2026
/Akiba K Robinson/
Primary Examiner, Art Unit