DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Receipt is acknowledged of applicant's claim for foreign priority to Japanese application JP2023-148668, filed 13 September 2023. The certified copy of the priority document required by 37 CFR 1.55 has been received. Priority is acknowledged under 35 U.S.C. 119(a)-(d) and 37 CFR 1.55.
Information Disclosure Statement
The information disclosure statement (IDS) filed 14 March 2024 has been considered and placed in the application file.
Specification - Title
The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.
The following title is suggested: "Increasing Recognition of Circuit Board Elements by First Recognizing Element Types."
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f):
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f). The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f), because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are:
“an image generation unit configured to” in claim 1; and
“a determination unit configured to” in claim 1.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f), they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f), applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f).
Claim Interpretation
Under MPEP 2143.03, "All words in a claim must be considered in judging the patentability of that claim against the prior art." In re Wilson, 424 F.2d 1382, 1385, 165 USPQ 494, 496 (CCPA 1970). As a general matter, the grammar and ordinary meaning of terms as understood by one having ordinary skill in the art used in a claim will dictate whether, and to what extent, the language limits the claim scope. Language that suggests or makes a feature or step optional but does not require that feature or step does not limit the scope of a claim under the broadest reasonable claim interpretation. In addition, when a claim requires selection of an element from a list of alternatives, the prior art teaches the element if one of the alternatives is taught by the prior art. See, e.g., Fresenius USA, Inc. v. Baxter Int’l, Inc., 582 F.3d 1288, 1298, 92 USPQ2d 1163, 1171 (Fed. Cir. 2009).
Claims 2, 4, 5, and 9 recite “or” or “at least one of.” Since “at least one of” and “or” are disjunctive, any one of the alternatives found in the prior art is sufficient to reject the claim. While citations have been provided for completeness and to promote compact prosecution, only one element is required. On balance, the disjunctive interpretation (one of A, B, or C) appears to enjoy the most support in the specification and is therefore adopted for purposes of this Office Action. Applicant’s comments and/or amendments relating to this issue are invited to clarify the claim language and the prosecution history.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
Claims 1-9 are rejected under 35 U.S.C. 112(b), as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor regards as the invention.
According to MPEP 2143.03(I), “If a claim is subject to more than one interpretation, at least one of which would render the claim unpatentable over the prior art, the examiner should reject the claim as indefinite under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph (see MPEP § 2175) and should reject the claim over the prior art based on the interpretation of the claim that renders the prior art applicable. (Ex parte Ionescu, 222 USPQ 537 (Bd. Pat. App. & Inter. 1984)).”
Claims 1 and 9 recite “to apply a color or a pattern corresponding to color information.” It is unclear how objects are recognized (i.e., what their edges and boundaries are) so that the color can be applied, how the colors or patterns are determined, and how the “types of components” are recognized. For purposes of searching the limitations, the interpretation “using a layered neural network” has been applied.
Claim Rejections - 35 USC § 101 (Non-Statutory Subject Matter)
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-8 are rejected under 35 U.S.C. 101 because the claimed invention does not fit into one of the four categories of subject matter Congress deemed to be appropriate subject matter for a patent: processes, machines, manufactures, and compositions of matter. "If a claim covers material not found in any of the four statutory categories, that claim falls outside the plainly expressed scope of § 101 even if the subject matter is otherwise new and useful." In re Nuijten, 500 F.3d 1346, 1354, 84 USPQ2d 1495, 1500 (Fed. Cir. 2007).
A machine is a "concrete thing, consisting of parts, or of certain devices and combination of devices." Digitech, 758 F.3d at 1348-49, 111 USPQ2d at 1719 (quoting Burr v. Duryee, 68 U.S. 531, 570, 17 L. Ed. 650, 657 (1863)). This category "includes every mechanical device or combination of mechanical powers and devices to perform some function and produce a certain effect or result." Nuijten, 500 F.3d at 1355, 84 USPQ2d at 1501 (quoting Corning v. Burden, 56 U.S. 252, 267, 14 L. Ed. 683, 690 (1854)).
Claim 1 is not directed to a machine even though it recites “an information processing apparatus.” The claim does not recite parts or a combination of devices; instead, it recites several “units configured to” perform functions.
As the courts' definitions of machines, manufactures and compositions of matter indicate, a product must have a physical or tangible form in order to fall within one of these statutory categories. Digitech, 758 F.3d at 1348, 111 USPQ2d at 1719. Thus, the Federal Circuit has held that a product claim to an intangible collection of information, even if created by human effort, does not fall within any statutory category. Digitech, 758 F.3d at 1350, 111 USPQ2d at 1720 (claimed "device profile" comprising two sets of data did not meet any of the categories because it was neither a process nor a tangible product). Similarly, software expressed as code or a set of instructions detached from any medium is an idea without physical embodiment. See Microsoft Corp. v. AT&T Corp., 550 U.S. 437, 449, 82 USPQ2d 1400, 1407 (2007); see also Gottschalk v. Benson, 409 U.S. 63, 67, 175 USPQ 673, 675 (1972) (an "idea" is not patent eligible).
Dependent claims 2-8 are also rejected as depending from claim 1, each likewise reciting “an information processing apparatus” embodying functional descriptive material without adding sufficient physical form to qualify under a statutory category.
Claim Rejections - 35 USC § 101 (Abstract Idea)
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-9 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. All of the claims are either method claims (claim 9) or apparatus/machine claims (claims 1-8) under Step 1. Under Step 2A, however, all of these claims recite abstract ideas, and specifically mental processes: concepts performed in the human mind, including observation, evaluation, judgment, and opinion, which here are generally described as a human visually observing an image of a board to judge the appropriateness of the design shown in the design information. These mental processes are more particularly:
Recited in claim 1 as:
determine appropriateness of a design shown in the design information…
Recited in claim 9 as:
determining appropriateness of a design shown in the design information…
It is noted that the above analysis is according to the 2019 Revised Patent Subject Matter Eligibility Guidance published in the Federal Register (84 FR 50) on January 7, 2019 and MPEP 2106.04(a)(2)(III).
Consider also that “If a claim recites a limitation that can practically be performed in the human mind, with or without the use of a physical aid such as pen and paper, the limitation falls within the mental processes grouping, and the claim recites an abstract idea,” as per MPEP 2106.04(a)(2)(III)(B). See also footnotes 14 and 15 of the Federal Register Notice. As detailed above, the steps of determining, etc., may be practically performed in the human mind with the use of a physical aid such as pen and paper (e.g., marking up a printed image of the board with a pen).
Under Step 2A, Prong Two, this judicial exception is not integrated into a practical application because claims 1-9 do not recite additional elements that integrate the exception into a practical application. The only additional elements (a cause identification unit (claim 2); a learning model generation unit (claim 8)) are recited at a high level of generality and merely amount to “apply it,” or otherwise merely use a generic computer as a tool to perform an abstract idea, which is not indicative of integration into a practical application as per MPEP 2106.05(f). See also MPEP 2106.04(a)(2)(III) with respect to mental processes: “Nor do the courts distinguish between claims that recite mental processes performed by humans and claims that recite mental processes performed on a computer.” See also MPEP 2106.04(a)(2)(III)(C)(3) (using a computer as a tool to perform a mental process) and MPEP 2106.04(a)(2)(III)(D), as well as the case law cited therein.
In other words, under Step 2B the additional elements are recited at a high level of generality that does not amount to significantly more, and/or they could practically be performed in the human mind.
For all of the above reasons, taken alone or in combination, claims 1-9 are directed to a non-statutory mental process without significantly more.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-9 are rejected under 35 U.S.C. 103 as being unpatentable over US Patent Application Publication No. 2024/0420338 A1 (Ono) in view of US Patent Application Publication No. 2021/0142456 A1 (Varga et al.).
Claim 1
[Figure: Ono, Fig. 2, showing a system using color to assist component identification.]
Regarding Claim 1, Ono teaches an information processing apparatus relating to soldering of a part onto a substrate ("a component inspection method including: imaging an upper surface of a board on which a component is mounted," paragraph [0006]), comprising:
an image generation unit configured to generate input image data showing a pre-reflow state based on soldering-related design information ("an imaging section configured to image an upper surface of a board on which a component is mounted as a color image," paragraph [0008]), and to apply a color or a pattern corresponding to color information set for each of a plurality of types of components shown in the input image data to each of the plurality of types of components based on the color information ("execute component inspection processing of determining whether the component is mounted at a correct position by performing edge detection on the color image subjected to the emphasis processing," paragraph [0008]); and
a determination unit configured to determine appropriateness of a design shown in the design information of a post-reflow inspection ("execute component inspection processing of determining whether the component is mounted at a correct position by performing edge detection on the color image subjected to the emphasis processing," paragraph [0009] and "the component inspection may be executed after all the components to be mounted in the own device are mounted," paragraph [0045]) in response to inputting of image data that is based on a pre-reflow image by inputting, to the machine learning model, the input image data in which the color or the pattern corresponding to the color information is applied to each of the plurality of types of components ("emphasis processing is processing of emphasizing the contrast between the region of component P and the background region in color image Im1. Specifically, CPU 41 multiplies the luminance values of the R component, the G component, and the B component of the specific pixel extracted in S140 among the luminance values of the R component, the G component, and the B component of each pixel stored in storage 43 in S130 by n to generate emphasized color image Im2," paragraph [0030] where emphasis processing teaches applying color information).
Ono does not explicitly teach the use of a machine learning model.
[Figure: Varga et al., Fig. 4, showing a machine learning system using multiple data sources.]
However, Varga et al. teach using a machine learning model configured to output an inspection result ("A stacked ML system might also include an ML classifier that looks at the component shape and identifying features (text, barcodes, color bands, etc.) to make sure that the correct components are installed at the correct locations and orientations," paragraph [0033]; "The system, in one embodiment, may use CAD files to create synthetic images of PCBA's rendered from CAD data. The system may create synthetic images of 'good' and 'bad' boards, with appropriate tags, and use these synthetic images to train ML," paragraph [0042]; and "The output of the camera-based inspection machine 210 is either a 'pass' indicating that the board has no detected classification rule violations (errors), or a 'fail' indicating that the system has at least one classification rule violation (error)," paragraph [0051]).
Therefore, taking the teachings of Ono and Varga et al. as a whole, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the "Component Inspection Method and Component Inspection Device" taught by Ono to use the "Image Analysis System for Testing in Manufacturing" taught by Varga et al. The suggestion/motivation for doing so would have been that "various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention," as noted in paragraph [0142] of the Varga et al. disclosure; that the combination would predictably achieve higher efficiency or make use of newer technology, with a reasonable expectation of success that a neural network would be able to use the color information; and/or that doing so merely combines prior art elements according to known methods to yield predictable results.
The rejection of apparatus claim 1 above applies mutatis mutandis to the corresponding limitations of method claim 9, noting that the rejection above cites both device and method disclosures. Claim 9 is mapped below for clarity of the record and to identify any limitations not included in claim 1.
Claim 2
Regarding claim 2, Ono teaches the information processing apparatus according to claim 1, as noted above.
Ono does not explicitly teach the use of a machine learning model.
However, Varga et al. teach further comprising:
a cause identification unit configured, if the inspection result of the post-reflow inspection output from the machine learning model in response to the inputting of the input image data to the machine learning model indicates defectiveness, to identify a cause of the defectiveness indicated by the inspection result ("At block 870, the system is re-trained, based on the operator identification and final testing results," paragraph [0119]) in at least one of: the input image data ("For example, if an element is replaced, the system may indicate that as an anomaly," paragraph [0119] which teaches that image data may show the defect as a new element); the design information used in generating the input image data ("At block 745, the element in the region of interest is analyzed. In one embodiment, the inspection process described above with respect to FIGS. 5 and 6 may be used, to classify the element as either good, bad, or do not know," paragraph [0104] where block 745 is noted as evaluating CAD information); and data generated in a process of generating the input image data from the design information.
Ono and Varga et al. are combined as per claim 1.
Claim 3
Regarding claim 3, Ono teaches the information processing apparatus according to claim 1, further comprising:
a color information setting unit configured to set the color information for each of the plurality of types of components so that the plurality of types of components are distinguished from one another by the color information, and that each of the plurality of types of components is identified by the color information ("Then, CPU 41 reads a luminance value upper limit and a tolerance of an R component, a G component, and a B component corresponding to the color of component P set as the inspection target in S110 from luminance value upper limit data 45 as illustrated in FIG. 4 (S120)," paragraph [0028] where luminance values teach colors of components).
Claim 4
Regarding claim 4, Ono teaches the information processing apparatus according to claim 1, wherein
the image generation unit generates an outline element of each of the plurality of types of components based on at least the soldering-related design information ("Specifically, CPU 41 first sets the outline (outer shape) of the inspection target component based on edge C of component P," paragraph [0032]), and
the image generation unit applies, in generating of the input image data, the color or the pattern corresponding to the color information set for each of the plurality of types of components to the generated outline element ("emphasis processing is processing of emphasizing the contrast between the region of component P and the background region in color image Im1. Specifically, CPU 41 multiplies the luminance values of the R component, the G component, and the B component of the specific pixel extracted in S140 among the luminance values of the R component, the G component, and the B component of each pixel stored in storage 43 in S130 by n to generate emphasized color image Im2," paragraph [0030] where emphasis processing teaches applying color information).
Claim 5
Regarding claim 5, Ono teaches the information processing apparatus according to claim 4, as noted above.
Ono does not explicitly teach the use of a machine learning model.
However, Varga et al. teach wherein
the image generation unit generates the input image data using, as the soldering-related design information, design information of the substrate ("Find solder joints (using CAD or visual)," paragraph [0036]), design information of a metal mask used in mounting of solder onto the substrate ("Inspect the board itself: type, orientation, required features such as holes, bar codes, solder mask, pads, etc. etc.," paragraph [0041]), and design information of the part ("the evaluation component shape independent is useful because many components come in a variety of sizes and it is quite frequent that in a single design multiple elements have multiple size configurations," paragraph [0034]), and
the image generation unit generates the outline element of each of the plurality of types of components in such a manner that an outline element of each of one or more types of components configuring the substrate is generated based on at least the design information of the substrate ("A stacked ML system might also include an ML classifier that looks at the component shape and identifying features (text, barcodes, color bands, etc.) to make sure that the correct components are installed at the correct locations and orientations," paragraph [0033]), an outline element of the solder is generated based on at least the design information of the metal mask ("For example, for solder evaluations, an ML system may focus solely on solder joints between component leads(wires) and PCB solder pads," paragraph [0033] where PCB solder pads are metal masks), and an outline element of the part is generated based on at least the design information of the part ("A stacked ML system might also include an ML classifier that looks at the component shape," paragraph [0033]).
Ono and Varga et al. are combined as per claim 1.
Claim 6
Regarding claim 6, Ono teaches the information processing apparatus according to claim 5, wherein
the image generation unit generates the outline element of the solder based on relationship data indicating a relationship between an opening of the metal mask and a shape of the solder mounted on the substrate, in addition to the design information of the metal mask ("Specifically, CPU 41 first sets the outline (outer shape) of the inspection target component based on edge C of component P," paragraph [0032]).
Claim 7
Regarding claim 7, Ono teaches the information processing apparatus according to claim 1, wherein the image generation unit generates, as an image showing the pre-reflow state in the input image data, more than one of three types of images including: an image of only the substrate ("Component mounter 10 includes a mark camera that images a color image of board S from above," paragraph [0022] where the board is a substrate); an image of the substrate on which only solder is mounted; and an image of the substrate on which the solder and the part are mounted ("control device 40 executes component inspection for determining whether each component on board S is mounted at a correct position based on the color image," paragraph [0024]).
Claim 8
Regarding claim 8, Ono teaches the information processing apparatus according to claim 1, as noted above.
Ono does not explicitly teach the use of a machine learning model.
However, Varga et al. teach further comprising:
a learning model generation unit configured to train a model through deep learning using learning data in which image data that is based on an image showing a pre-reflow state is associated with an inspection result of a post-reflow inspection that has been actually performed ("At block 830, the CNN system is trained on the training images," paragraph [0115] where a CNN is a model).
Ono and Varga et al. are combined as per claim 1.
Claim 9
Regarding claim 9, Ono teaches an information processing method relating to soldering of a part onto a substrate ("a component inspection method including: imaging an upper surface of a board on which a component is mounted," paragraph [0006]), comprising:
generating input image data showing a pre-reflow state based on soldering-related design information ("an imaging section configured to image an upper surface of a board on which a component is mounted as a color image," paragraph [0008]), and applying a color or a pattern corresponding to color information set for each of a plurality of types of components shown in the input image data to each of the plurality of types of components based on the color information ("execute component inspection processing of determining whether the component is mounted at a correct position by performing edge detection on the color image subjected to the emphasis processing," paragraph [0008]); and
determining appropriateness of a design shown in the design information of a post-reflow inspection ("execute component inspection processing of determining whether the component is mounted at a correct position by performing edge detection on the color image subjected to the emphasis processing," paragraph [0009] and "the component inspection may be executed after all the components to be mounted in the own device are mounted," paragraph [0045]) in response to inputting of image data that is based on a pre-reflow image by inputting, to the machine learning model, the input image data in which the color or the pattern corresponding to the color information is applied to each of the plurality of types of components ("emphasis processing is processing of emphasizing the contrast between the region of component P and the background region in color image Im1. Specifically, CPU 41 multiplies the luminance values of the R component, the G component, and the B component of the specific pixel extracted in S140 among the luminance values of the R component, the G component, and the B component of each pixel stored in storage 43 in S130 by n to generate emphasized color image Im2," paragraph [0030] where emphasis processing teaches applying color information).
Ono does not explicitly teach the use of a machine learning model.
However, Varga et al. teach using a machine learning model configured to output an inspection result ("A stacked ML system might also include an ML classifier that looks at the component shape and identifying features (text, barcodes, color bands, etc.) to make sure that the correct components are installed at the correct locations and orientations," paragraph [0033]; "The system, in one embodiment, may use CAD files to create synthetic images of PCBA's rendered from CAD data. The system may create synthetic images of 'good' and 'bad' boards, with appropriate tags, and use these synthetic images to train ML," paragraph [0042]; and "The output of the camera-based inspection machine 210 is either a 'pass' indicating that the board has no detected classification rule violations (errors), or a 'fail' indicating that the system has at least one classification rule violation (error)," paragraph [0051]).
Ono and Varga et al. are combined as per claim 1.
References Cited
The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure.
US Patent Application Publication No. 2019/0364707 A1 to Futamura et al. discloses a substrate inspection device for a component mounting machine that mounts an electronic component on solder printed on a substrate by a solder printing machine, the device inspecting the solder and a thermosetting adhesive applied on the substrate and including: an irradiator that irradiates the solder and the adhesive with light; an imaging device that takes an image of the irradiated solder and the irradiated adhesive; and a processor that: generates actual solder position information of a solder group that the electronic component is mounted on based on the image, wherein the solder group includes two or more solders; and generates, based on design data or manufacturing data, ideal solder inspection reference information indicating a reference inspection position and/or a reference inspection range of the solder included in the solder group.
US Patent Application Publication No. 2023/0232603 A1 to Kikuchi et al. discloses a solder printing inspection device including: an illumination device that irradiates, with a predetermined light, a printed circuit board on which a solder paste is printed; an imaging device that takes an image of the printed circuit board irradiated with the predetermined light and obtains image data; and a control device that: based on the image data, obtains three-dimensional measurement data of the solder paste printed on the printed circuit board; based on the three-dimensional measurement data, extracts upper portion shape data of an upper portion of the solder paste, the upper portion having a height equal to or higher than a predetermined height; and compares the upper portion shape data with a predetermined criterion.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HEATH E WELLS whose telephone number is (703)756-4696. The examiner can normally be reached Monday-Friday 8:00-4:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ms. Jennifer Mehmood can be reached on 571-272-2976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Heath E. Wells/Examiner, Art Unit 2664
Date: 7 January 2026