DETAILED ACTION
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 1/26/2026 has been entered.
Claims 1-10 are presented for examination.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments - Drawings
Applicant's amendments have been fully considered; however, issues remain, as discussed below.
Response to Arguments - 35 USC § 102
Applicant’s arguments have been considered but are moot in view of the new grounds of rejection.
Drawings
The drawings are objected to because Figures 9, 10, and 12 contain text which is placed upon hatched or shaded surfaces and Figures 4B, 9, 10, and 12 contain text which is smaller than .32 cm. ( 1/8 inch); see 37 CFR § 1.84(p)(3). Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as “amended.” If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
Applicant is directed to 37 CFR § 1.84 - Standards for drawings, in particular:
(p) Numbers, letters, and reference characters:
(3) Numbers, letters, and reference characters must measure at least .32 cm. ( 1/8 inch) in height. They should not be placed in the drawing so as to interfere with its comprehension. Therefore, they should not cross or mingle with the lines. They should not be placed upon hatched or shaded surfaces. When necessary, such as indicating a surface or cross section, a reference character may be underlined and a blank space may be left in the hatching or shading where the character occurs so that it appears distinct.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 1-10 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
Applicant has amended claims 1 and 10 to recite “wherein the machine learning model is a generative model trained on a plurality of training jewelry designs to synthesize novel jewelry design geometries by learning latent representations from the plurality of training jewelry designs” and “wherein the output jewelry model comprises design features not derivable from parametric adjustment of any single training jewelry design”. These features are not present in the disclosure and therefore constitute new matter. The disclosure contains no description of latent representations, nor any description of a design feature that would not be “derivable” from parametric adjustment. If Applicant believes written description support exists, Applicant is requested to identify the specific page and line numbers providing that support.
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-10 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Applicant has amended claims 1 and 10 to recite “wherein the machine learning model is a generative model trained on a plurality of training jewelry designs to synthesize novel jewelry design geometries by learning latent representations from the plurality of training jewelry designs” and “wherein the output jewelry model comprises design features not derivable from parametric adjustment of any single training jewelry design”. As discussed above with respect to the written description rejection, these features are not described in the disclosure. Because the disclosure provides no guidance as to the meaning or scope of “latent representations” or of design features “not derivable” from parametric adjustment, the metes and bounds of the claims cannot be ascertained. Any application of prior art is the Examiner's best interpretation of the claimed subject matter.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 3-6 and 8-10 are rejected under 35 U.S.C. 103 as being unpatentable over US 2015/0154678 (“Fonte”) in view of US 20220051479 A1 (“Agarwal”).
Regarding claims 1 and 10 (claim 1 is presented here as an example), Fonte teaches:
A method for generating a custom jewelry design based on user preferences using machine learning (Fonte: Abstract, para [0028]), the method comprising:
displaying a graphical user interface in a first interface mode with visual elements for indicating user preferences (Fonte: para [0096], “the computer system that generates these custom products obtains information. The computer system obtains imaging data of the user, determines anatomic data, measurements from image data, and further optional user preferences and information such as the users likes or dislikes, ascertained from analysis of the users computer history. The computer system also accepts inputs from the user, where the user may specify certain preferences or directly control some aspects of the product customization”; para [0283], “FIG. 12 shows an example computer interface 1201”; para [0038], [0041]; Fig. 11);
capturing user input on the graphical user interface indicative of a user's preferences (Fonte: para [0096], “The computer system also accepts inputs from the user, where the user may specify certain preferences or directly control some aspects of the product customization”; para [0038], [0041]; Fig. 11; para [0177], para [0197]);
saving parameter values associated with the user's preferences to a user profile (Fonte: para [0096], “The computer system also accepts inputs from the user, where the user may specify certain preferences or directly control some aspects of the product customization”; para [0038], [0041]; Fig. 11; para [0177]; para [0197], “For example, a user's image data analysis and a few basic answers to questions provides the following detailed profile of that user: a woman in mid-30s, dark medium-length hair, a square face, very small nose, slightly blue eyes, medium skin color, trendy fashion taste, white-collar profession, prefers bold fashion, wears glasses daily, and lives in an urban area. Each of these features may be associated with various eyewear preferences, and the combined information when classified by the machine learning method is able to recommend a set of eyewear that truly matches the user's preferences, even if she has not stated or does not know a priori her eyewear design preferences”);
providing the saved parameter values as input to a machine learning model configured to generate a new jewelry design model by synthesizing a combination of jewelry features corresponding to the parameter values,
wherein the machine learning model is a model trained on a plurality of training jewelry designs to synthesize novel jewelry design geometries by learning from the plurality of training designs (Fonte: para [0052], “learning from a user's interactions and preferences involving a learning machine or predictor or prognostication machine. The system and method include tracking the actions a user takes selecting, customizing, and previewing eyewear. The system and method further include machine learning analysis of the tracked actions in addition to the user provided image data, quantitative anatomic information, and other provided information to determine user preferences for custom eyewear properties. The system and method further include making recommendations to the user based on the learning analysis”; para [0197], “the combined information when classified by the machine learning method is able to recommend a set of eyewear that truly matches the user's preferences, even if she has not stated or does not know a priori her eyewear design preferences”; para [0099], “The computer system as illustrated at 12, obtains optional user preferences and information which may be gleaned from a wide variety of sources. The computer system at 14 is provided with at least one configurable product model 13 to guide the computer system. Having analyzed all of its inputs, computer system 14 automatically outputs a new custom product model. The output of the computer system 14 is therefore provided to preview system 15 in which the computer system creates previews of custom products and the user. Then, as illustrated at 17, the computer system prepares product models and information for manufacturing the selected one-up, fully-custom product”; para [0318], “the systems and methods described herein may also be used in the customization, rendering, display, and manufacture of other custom products. Since the technology described applies to the use of custom image data, anatomic models, and product models that are built for customization, a multitude of other products is designed in a similar way, for example: Custom Jewelry (e.g. bracelets, necklaces, earrings, rings, nose-rings, nose studs, tongue rings/studs, etc)”; para [0194], “a training database of preferences associated with the various features. These preferences include but are not limited to: Eyewear style, Eyewear material, Eyewear shape, Eyewear color, Eyewear finish, Eyewear size including local size adjustments, including overall size and custom local adjustments such as width, thickness, etc., Eyewear position on face, and Lens size”);
obtaining, from the machine learning model, an output jewelry model generated based on the provided parameter values, where the output jewelry model comprises a digital representation of a new jewelry design not present in the training data (Fonte: para [0197], “the combined information when classified by the machine learning method is able to recommend a set of eyewear that truly matches the user's preferences, even if she has not stated or does not know a priori her eyewear design preferences”; para [0052], “a system and method are disclosed for learning from a user's interactions and preferences involving a learning machine or predictor or prognostication machine. The system and method include tracking the actions a user takes selecting, customizing, and previewing eyewear. The system and method further include machine learning analysis of the tracked actions in addition to the user provided image data, quantitative anatomic information, and other provided information to determine user preferences for custom eyewear properties. The system and method further include making recommendations to the user based on the learning analysis”; para [0034], “The subject methods offer high-fidelity renderings of one-up custom products. These are not standard previews of previously existing products…These previews involve more advanced techniques than previews of existing products because the product has not existed and prior photos, documentation or testing of the product representation does not exist yet. Everything must be generated or configured on-the-fly to enable a high quality preview of a one-up custom product that has not been built yet. The subject system is not merely rendering existing products (e.g. eyewear or parts of eyewear), but provides completely new custom designs from scratch”), wherein the output jewelry model comprises design features not derivable from parametric adjustment of any single training jewelry design (Fonte: para [0034], “one-up custom product that has not been built yet. The subject system is not merely rendering existing products (e.g. eyewear or parts of eyewear), but provides completely new custom designs from scratch”); and
displaying the output jewelry model on the graphical user interface (Fonte: para [0176], “The configurable nature of the model would allow a multitude of materials, paints, colors, and surface finishes to be represented. Various rendering techniques known to those skilled in the art, such as ray tracing, are used to render the eyewear and lenses in the most photorealistic manner possible, with the intension to accurately represent and reproduce on the display the frame and lenses exactly as how they would appear when manufactured. Other optical interaction effects, such as shadows and reflections, can be displayed on the eyewear and on the 3D model of the user's face”; para [0318], “the systems and methods described herein may also be used in the customization, rendering, display, and manufacture of other custom products. Since the technology described applies to the use of custom image data, anatomic models, and product models that are built for customization, a multitude of other products is designed in a similar way, for example: Custom Jewelry (e.g. bracelets, necklaces, earrings, rings, nose-rings, nose studs, tongue rings/studs, etc),”).
Fonte does not teach, but Agarwal does teach:
a generative model trained on a plurality of training jewelry designs to synthesize novel jewelry design geometries by learning latent representations from the plurality of training jewelry designs (Agarwal: para [0087], “the first ML model may include or correspond to multiple generative adversarial networks (GANs) configured to identify the one or more visual apparel design elements represented by the processed instruction data, as described with reference to FIGS. 1 and 3. ”; para [0026], “generating training data based on the user sentiment and/or public sentient to further refine the apparel design process. As used herein, “apparel” may include … jewelry (e.g., rings, necklaces, bracelets, earrings, broaches, and the like) or other customizable wearable goods”; para [0078], “With latent space additions, a latent vector z may be used to interpolate new instances of image representations”; para [0037], “ the training data 172 may include data indicating …named entities (e.g., labeled based on corresponding visual apparel design elements), images of real apparel design elements, images of fake apparel design elements (or improperly labeled images)”);
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined Fonte (directed to custom product design, including custom jewelry) and Agarwal (directed to machine-learning-based apparel design) and arrived at custom jewelry design using a generative machine learning model. One of ordinary skill in the art would have been motivated to make such a combination to provide a “method for supporting automated apparel design using machine learning” (Agarwal: para [0004]).
Regarding claim 3, Fonte and Agarwal teach:
The method of claim 1, wherein the graphical user interface is in a second interface mode and displays controls for jewelry design parameters and current values of the jewelry design parameters; wherein the captured user input indicates changing a value of one of the jewelry design parameters (Fonte: para [0040], “applying, using at least one computer system, a configurable product model to the image data or anatomic model of the user; previewing, using at least one computer system, images of the user with the configurable product model; optionally adjusting and updating the preview, using at least one computer system and/or user input, the configurable product model properties (eg, custom shape, size, dimensions, colors, finish, etc); preparing, using at least a computer system that executes instructions for manufacturing the custom product based on the previewed model; and manufacturing, using at least one computer system and manufacturing system, the new custom product”; para [0073], “FIG. 12 is a diagrammatic illustration of an example illustration showing the custom adjustment of the width of eyewear with a computer system an interface to be able to ascertain the product placement on the face of the individual as well as improvements that can be made at the time that improve representations of the individual”).
Regarding claim 4, Fonte and Agarwal teach:
The method of claim 1, wherein the graphical user interface is in a third interface mode and displays a drawing interface with two drawing panels, a first panel showing visual indicators of user input and a second panel showing the output jewelry model; and wherein the captured user input includes lines drawn by hand within the first drawing panel on the graphical user interface (Fonte: para [0283], “FIG. 12 shows an example computer interface 1201 for adjusting eyewear 1203 previewed on user 1202. The base designs consist of a variety of styles or materials, including but not limited to fully-rimmed, semi-rimmed, rimless, plastic, or metal. The controls include but are not limited to: control points on the eyewear that can be dragged or adjusted, sliders that are linked to certain features, directly drawing on the frame, and touch, gesture, mouse, or other interaction to stretch or push/pull features of the frame. In one embodiment, the controls allow the user to change certain limited features, including but not limited to the nose pad width, the temple length and height, and the width and height of the front of the eyewear. For example, if user 1202 in FIG. 12 has a narrow face, he adjusts the eyewear 1203 to make the overall size of the eyewear narrower. The user selects the eyewear 1203 with the computer system input device, and moves the edge of the eyewear inward toward his face as indicated by the arrow in FIG. 12. The resulting modified eyewear 1206 is shown in the updated preview 1205. The ability for the user to make such easy and custom adjustments to eyewear before purchasing represents a major change in the way the eyewear products are purchased from the current state of the art. The feedback may be nearly instantaneous, with the user seeing the rendered preview updated on the computer system display”).
Regarding claim 5, Fonte and Agarwal teach:
The method of claim 1, wherein the graphical user interface is in a third interface mode and displays a drawing interface with one drawing panel, and visual indicators of user input are overlaid over the displayed output jewelry model; and wherein the captured user input includes lines drawn by hand within the drawing panel on the graphical user interface (Fonte: para [0283], “FIG. 12 shows an example computer interface 1201 for adjusting eyewear 1203 previewed on user 1202. The base designs consist of a variety of styles or materials, including but not limited to fully-rimmed, semi-rimmed, rimless, plastic, or metal. The controls include but are not limited to: control points on the eyewear that can be dragged or adjusted, sliders that are linked to certain features, directly drawing on the frame, and touch, gesture, mouse, or other interaction to stretch or push/pull features of the frame. In one embodiment, the controls allow the user to change certain limited features, including but not limited to the nose pad width, the temple length and height, and the width and height of the front of the eyewear. For example, if user 1202 in FIG. 12 has a narrow face, he adjusts the eyewear 1203 to make the overall size of the eyewear narrower. The user selects the eyewear 1203 with the computer system input device, and moves the edge of the eyewear inward toward his face as indicated by the arrow in FIG. 12. The resulting modified eyewear 1206 is shown in the updated preview 1205. The ability for the user to make such easy and custom adjustments to eyewear before purchasing represents a major change in the way the eyewear products are purchased from the current state of the art. The feedback may be nearly instantaneous, with the user seeing the rendered preview updated on the computer system display”).
Regarding claim 6, Fonte and Agarwal teach:
The method of claims 2, 3, 4, or 5, further comprising capturing user input indicating to change the graphical user interface to a different interface mode; and changing the graphical user interface to display the indicated interface mode (Fonte: para [0250], “For example, as the user changes the view of one instance, the same change of view is applied to all instances”; para [0106], “As illustrated at 16, the computer system accepts user input to update, inform, or control the custom product model. The user, or others given permission by the user, may change the preview”; para [0216], “at least one still image is shown, such as a front and side view, or multiple views at set degrees around a vertical axis centered on the users face. In yet another embodiment, an augmented reality approach is used”).
Regarding claim 8, Fonte and Agarwal teach:
The method of claim 1, further comprising generating training data for the machine learning model by creating new combinations of parameter values (Fonte: para [0175], “FIG. 29 illustrates an example of customization achieved with configurable product model; in particular, the ability to combine various parameters to refine and customize a product model. An eyewear model 2900 is configured to the 16 variations in the illustration. The 4 columns 2902 illustrate example configurations of the eyewear lens width 2903 and height 2904. The 4 rows 2901 illustrate the combinations of varying parameters for nose bridge width 2905, the distance 2906 between the temples where they contact the ears, the height 2907 from the front frame to the ears, and other subtle changes. Key features such as the material thickness 2908 and the hinge size and location 2909 remain unchanged. The parametric configuration enables the eyewear design to be highly configurable while remaining manufacturable. A manufacturer may use 1 hinge and 1 material thickness for all these designs and more, yet still allow massive customization of the underlying shape and size. Models 2900 and 2910 are quite distinct and they would traditional require different mass produced products. It would be completely impractical to offer this level of variation to customers with traditional mass-produced products, requiring thousands, millions, or more components to be designs and stocked. A configurable model with the rest of the method and system described herein allows one base model to be configured in all the configurations illustrated in FIG. 29, so one product can be custom tailored to an individual customer and then produced. It should be noted that these 16 variations represent an extremely small subset of the total potential variation of the design; there are thousands, millions, or infinite variation possible by interpolating between the examples shown, extrapolating beyond, and configuring other parameters not shown in the illustration. For example, if a configurable model has 10 parameters that can be altered; each parameter has 20 increments (which could also be infinite) such as distances of 2 mm, 4 mm, 6 mm, and so on; and the model is available in 20 colors and 3 finishes; then the total combinations of configurations for that one model would be 6.times.10.sup.21, or six sextillion, which is 6000 multiplied by 1 billion multiplied by 1 billion”; para [0179], “the computer system optimizes the fit and style based on other techniques, such as machine learning or analytic equations. d) The computer system updates the configurable product model 3003 with new parameters”).
Regarding claim 9, Fonte and Agarwal teach:
The method of claim 1, wherein capturing user input indicative of a user’s preferences is performed on a client device and providing the saved parameter values to a machine learning model as input and obtaining an output jewelry model is performed on a server system (Fonte: para [0038]; para [0197], “the combined information when classified by the machine learning method is able to recommend a set of eyewear that truly matches the user's preferences, even if she has not stated or does not know a priori her eyewear design preferences”; para [0052], “a system and method are disclosed for learning from a user's interactions and preferences involving a learning machine or predictor or prognostication machine. The system and method include tracking the actions a user takes selecting, customizing, and previewing eyewear. The system and method further include machine learning analysis of the tracked actions in addition to the user provided image data, quantitative anatomic information, and other provided information to determine user preferences for custom eyewear properties. The system and method further include making recommendations to the user based on the learning analysis”; para [0099], “The computer system as illustrated at 12, obtains optional user preferences and information which may be gleaned from a wide variety of sources. The computer system at 14 is provided with at least one configurable product model 13 to guide the computer system. Having analyzed all of its inputs, computer system 14 automatically outputs a new custom product model. The output of the computer system 14 is therefore provided to preview system 15 in which the computer system creates previews of custom products and the user. Then, as illustrated at 17, the computer system prepares product models and information for manufacturing the selected one-up, fully-custom product”; para [0318], “the systems and methods described herein may also be used in the customization, rendering, display, and manufacture of other custom products. Since the technology described applies to the use of custom image data, anatomic models, and product models that are built for customization, a multitude of other products is designed in a similar way, for example: Custom Jewelry (e.g. bracelets, necklaces, earrings, rings, nose-rings, nose studs, tongue rings/studs, etc),”).
Claims 2 and 7 are rejected under 35 U.S.C. 103 as being unpatentable over US 2015/0154678 (“Fonte”) in view of US 20220051479 A1 (“Agarwal”), further in view of WO 2010/141637 (“Yu”).
Regarding claim 2, Fonte and Agarwal do not teach, but Yu does teach:
The method of claim 1, wherein the graphical user interface is in a first interface mode and displays an example piece of jewelry and requests a positive or negative preference; and where the captured user input indicates a positive preference (Yu: para [0039], “The user can provide input through the interface 110 that indicates (i) the users like or dislike of a particular fashion product or ensemble; (ii) the user's preference of one fashion product over another; and/or (iii) a rating or feedback that indicates the level of the user’s like or dislike for the fashion product. The visual aid component 140 present a set of visuals 152 that prompt the user to enter a response that indicates the users visual preference for the fashion genre depicted by that visual Still further, as described with an embodiment of FIG. 2 or FIG. 3A, the visuals may be presented to the user in a quiz or game fashion. In the quiz or game fashion, the user is shown panels that individually depict competing fashion products of different genres. The user can respond to each panel by indicating their preference, or like dislike, a one fashion product over at the other end of panel”; para [0023], “A fashion product includes, for example, clothing, accessories and apparel. Specific examples include…jewelry (e.g. watches, earrings, necklaces)”).
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined Fonte and Agarwal (directed to custom jewelry design) and Yu (directed to determining user fashion preferences) and arrived at custom jewelry design incorporating user preference indications. One of ordinary skill in the art would have been motivated to make such a combination to “provide a computer implemented method or system in which a user's genre preference to style or fashion can be determined programmatically” (Yu: para [0016]).
Regarding claim 7, Fonte and Agarwal do not teach, but Yu does teach:
The method of claim 1, further comprising determining matching items from a jewelry and accessories database to suggest pairing with the output jewelry model and displaying at least some of the matching items on the graphical user interface (Yu: para [0065], “an online commerce environment (such as implemented by a system of FIG. 1) implements a recommendation engine to recommend additional clothing, apparel, or accessories. Such recommendations may be made to, for example, provide a fashion ensemble or matching set of clothing/apparel”; para [0074], “product recommendations are made by (i) identifying predicted product genres of products (as described with FIG. 4), (ii) identifying a given user's genre or style preference for clothing and apparel (as described with an embodiment of FIG. 2); and (iii) matching product to user using (i) and (ii)”).
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined Fonte and Agarwal (directed to custom jewelry design) and Yu (directed to recommending matching items) and arrived at custom jewelry design with matching items. One of ordinary skill in the art would have been motivated to make such a combination to “provide a computer implemented method or system in which a user's genre preference to style or fashion can be determined programmatically” (Yu: para [0016]).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to NITHYA J. MOLL whose telephone number is (571)270-1003. The examiner can normally be reached Monday-Friday 10am-6pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Rehana Perveen can be reached at 571-272-3676. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/NITHYA J. MOLL/Primary Examiner, Art Unit 2189