DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Acknowledgement is made of Applicant’s claim of priority from PCT Application No. PCT/US2023/011112, filed January 19, 2023.
Status of Claims
Claims 1, 3, 5-12 and 14-21 are pending. Claims 2, 4 and 13 have been cancelled. Claim 21 is newly added.
Response to Arguments
Applicant's arguments filed November 5, 2025 have been fully considered, but they are not persuasive. Applicant argues that the Suplee reference is not sufficient to teach “causing the display to display a first dialing screen including a button to activate the camera to receive the image data from the camera.” Examiner respectfully disagrees. As described in the 35 USC 102 rejection below, Suplee teaches a camera that requires a user input, such as the user selecting an icon on the touch screen (i.e., a button to activate the camera to receive the image data from the camera). Examiner asserts that the image capture screen is sufficient to teach “the first dialing screen” because, as described, the processing is automatically initiated once the image is captured (see Para. [0017]). As shown in Paras. [0014] and [0015], the processing initiated by the image capture includes determining a function or application (e.g., calling a number, opening an internet browser, etc.) associated with the type of text, and the user could be prompted via the display 204 with a message such as "Press 1 to dial the number, 2 to find directions, 3 to email, or 4 to save contact." The application then advances to dialing the number and displays the phone number on the screen (see Para. [0019] and Fig. 2C) (i.e., a second dialing screen including the phone number). Applicant is reminded that the Specification and Drawings of the application are not read into the claims. Under the broadest reasonable interpretation of “first dialing screen including a button to activate the camera,” the image capture screen and subsequent processing of Suplee are sufficient to teach this limitation. Thus, the 35 USC 102 and 103 rejections of the claims are upheld, and consequently, THIS ACTION IS FINAL.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1, 3, 5, 7, 11-12, 14, 17 and 20-21 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Suplee, III et al. (US 2013/03290023 A1).
Regarding claim 1, Suplee teaches a mobile device comprising:
a display (Para. [0023], the portable computing device has a display screen operable to display image content to one or more users or viewers of the device);
a non-transitory computer readable medium configured to store instructions thereon (Para. [0027], the device can include many types of memory, data storage or computer-readable storage media. Claims 22-25, a non-transitory computer-readable storage medium storing instructions);
a camera (Para. [0017], the image or image information is obtained from a camera application of the portable computing device); and
a processor connected to the non-transitory computer readable medium and the display (Para. [0027], the device includes at least one processor for executing instructions that can be stored in at least one memory device), wherein the processor is configured to execute the instructions for:
receiving image data from the camera (Para. [0013], upon obtaining an image and/or identifying one or more portions of the image having properties that indicate the presence of text, an application on the device automatically runs an optical character recognition (OCR) algorithm to recognize the imaged text of the flyer);
determining a phone number based on the image data (Para. [0013], any identified strings are analyzed to further identify patterns that would indicate the presence of data objects or types of interest, such as email addresses, URL/web addresses, phone numbers, and the like); and
in response to the phone number being determined, automatically causing the display to display the phone number, wherein the display is configured to receive instructions for contacting the displayed phone number (Para. [0015], Fig. 2, the user could be prompted via the display 204 with a message such as "Press 1 to dial the number, 2 to find directions, 3 to email, or 4 to save contact." Referring now to FIG. 2C, the user has either selected the option to call the phone number, or the user has assigned dialing a phone number as the priority operation, and the device 202 is shown calling the number),
wherein the processor is configured to further execute the instructions for:
causing the display to display a first dialing screen including a button to activate the camera to receive the image data from the camera (Para. [0014]; Fig. 2A, the device locates texts in a captured image of a business card, identifies the type or pattern of the text (e.g. an email address, phone number, URL etc.), determines a function or application (e.g. calling a number, opening an internet browser, etc.) associated with the type of text, and sends the application or function at least a relevant portion of the located text to perform an operation therewith. Para. [0015], the user could be prompted via the display 204 with a message such as "Press 1 to dial the number, 2 to find directions, 3 to email, or 4 to save contact." (i.e., first dialing screen) In one example, the text of the data types could be automatically populated into respective data field types. Para. [0017], the image or image information is obtained from a camera application of the portable computing device. In one instance, hovering the device over an object facing the camera will cause the camera to automatically capture at least one image or record a sequence of images. In another instance, the camera requires input from a user in order to capture the image, such as by the user selecting an icon on a touch screen, for example), and
in response to the phone number being determined based on the image data, automatically causing the display to display a second dialing screen including the phone number (Para. [0019], Fig. 2, as described above with respect to FIG. 2, the function or application could involve dialing a number when the text pattern indicates a phone number or opening an address book for saving contact information when the text pattern indicates a presence of a phone number and a physical address. Fig. 2C, dialing screen including the phone number).
Regarding claim 3, Suplee teaches the mobile device according to claim 1, and further teaches wherein in response to the camera capturing an object including text, the processor is configured to execute the instructions for converting the captured object into the image data (Para. [0017], the obtained image information is processed to locate at least one region having properties of a string of text or characters. In one instance, the processing is automatically initiated upon receiving image information).
Regarding claim 5, Suplee teaches the mobile device according to claim 3, and further teaches wherein in response to the captured object being converted to the image data, the processor is configured to execute the instructions for:
detecting the text in the image data (Para. [0017], the obtained image information is processed to locate at least one region having properties of a string of text or characters);
extracting numerical data from the detected text (Para. [0017], the text string is analyzed using an optical character recognition algorithm to recognize text in the text string. The OCR algorithm can include a machine vision algorithm and other image preprocessing techniques or algorithms. A text pattern (e.g. an email, phone number, URL etc.) is identified that corresponds to the recognized text);
determining the phone number based on the extracted numerical data (Para. [0019], a text pattern (e.g. an email, phone number, URL etc.) is identified that corresponds to the recognized text); and
automatically causing the display to display the phone number (Para. [0019], the function or application could involve dialing a number when the text pattern indicates a phone number or opening an address book for saving contact information when the text pattern indicates a presence of a phone number and a physical address).
Regarding claim 7, Suplee teaches the mobile device according to claim 1, and further teaches wherein the processor is configured to execute the instructions for extracting numerical data from the image data (Para. [0017], The obtained image information is processed to locate at least one region having properties of a string of text or characters. The text string is analyzed using an optical character recognition algorithm to recognize text in the text string. The OCR algorithm can include a machine vision algorithm and other image preprocessing techniques or algorithms. A text pattern (e.g. an email, phone number, URL etc.) is identified that corresponds to the recognized text).
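For illustration only, the text-pattern identification that the mappings above attribute to Suplee (recognized OCR text analyzed for patterns such as phone numbers, email addresses, and URLs) can be sketched as follows. This is a minimal, hypothetical sketch using simple regular expressions; it is not Suplee's disclosed implementation, and the specific patterns are assumptions.

```python
import re

# Illustrative patterns for classifying substrings of OCR output; the exact
# patterns are assumptions, not taken from the Suplee reference.
PATTERNS = {
    "phone": re.compile(r"\(?\d{3}\)?[-.\s]?\d{3}[-.\s]?\d{4}"),
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "url": re.compile(r"https?://\S+|www\.\S+"),
}

def identify_data_types(ocr_text):
    """Return (type, matched_text) pairs for every recognized pattern
    in the OCR-recognized text string."""
    found = []
    for kind, pattern in PATTERNS.items():
        for match in pattern.finditer(ocr_text):
            found.append((kind, match.group()))
    return found
```

In this sketch, an identified "phone" match would then be handed to the dialing function, in the manner the mapping above describes for Suplee's text-pattern-to-application dispatch.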
Claim 11 recites a method with steps corresponding to the elements of the system recited in Claim 1. Therefore, the recited steps of this claim are mapped to the proposed reference in the same manner as the corresponding elements in its corresponding system claim.
Regarding claim 12, Suplee teaches the method according to claim 11, and further teaches wherein:
in response to the camera capturing an object including text, the method further comprises converting the captured object into the image data (Para. [0017], the obtained image information is processed to locate at least one region having properties of a string of text or characters. In one instance, the processing is automatically initiated upon receiving image information).
Regarding claim 14, Suplee teaches the method according to claim 12, further comprising:
detecting the text from the image data (Para. [0017], the obtained image information is processed to locate at least one region having properties of a string of text or characters);
extracting numerical data from the detected text (Para. [0017], the text string is analyzed using an optical character recognition algorithm to recognize text in the text string. The OCR algorithm can include a machine vision algorithm and other image preprocessing techniques or algorithms. A text pattern (e.g. an email, phone number, URL etc.) is identified that corresponds to the recognized text);
determining the phone number based on the extracted numerical data (Para. [0019], a text pattern (e.g. an email, phone number, URL etc.) is identified that corresponds to the recognized text); and
automatically displaying the phone number on the display (Para. [0019], the function or application could involve dialing a number when the text pattern indicates a phone number or opening an address book for saving contact information when the text pattern indicates a presence of a phone number and a physical address).
Regarding claim 17, Suplee teaches the method according to claim 11, further comprising: extracting numerical data from the image data (Para. [0017], The obtained image information is processed to locate at least one region having properties of a string of text or characters. The text string is analyzed using an optical character recognition algorithm to recognize text in the text string. The OCR algorithm can include a machine vision algorithm and other image preprocessing techniques or algorithms. A text pattern (e.g. an email, phone number, URL etc.) is identified that corresponds to the recognized text).
Claim 20 recites a computer-readable storage medium storing a program with instructions corresponding to the steps recited in Claim 11. Therefore, the recited programming instructions of this claim are mapped to the proposed reference in the same manner as the corresponding steps in its corresponding method claim. Finally, the Suplee reference discloses a non-transitory computer readable storage medium (Para. [0027], the device can include many types of memory, data storage or computer-readable storage media. Claims 22-25, a non-transitory computer-readable storage medium storing instructions).
Regarding claim 21, Suplee teaches the mobile device according to claim 1, wherein the first dialing screen further includes a call button (Para. [0015], the user could be prompted via the display 204 with a message such as "Press 1 to dial the number, 2 to find directions, 3 to email, or 4 to save contact." (i.e., press 1 is the call button)).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 6 and 15-16 are rejected under 35 U.S.C. 103 as being unpatentable over Suplee, III et al. (US 2013/03290023 A1) in view of Bray et al. (US 2012/0083294 A1).
Regarding claim 6, Suplee teaches the mobile device according to claim 1, and further teaches the processor is configured to execute the instructions for:
detecting text from the image data (Suplee, Para. [0017], the obtained image information is processed to locate at least one region having properties of a string of text or characters);
extracting numerical data from the detected text (Suplee, Para. [0017], the text string is analyzed using an optical character recognition algorithm to recognize text in the text string. The OCR algorithm can include a machine vision algorithm and other image preprocessing techniques or algorithms. A text pattern (e.g. an email, phone number, URL etc.) is identified that corresponds to the recognized text);
determining the phone number based on the extracted numerical data (Suplee, Para. [0017], the text string is analyzed using an optical character recognition algorithm to recognize text in the text string. The OCR algorithm can include a machine vision algorithm and other image preprocessing techniques or algorithms. A text pattern (e.g. an email, phone number, URL etc.) is identified that corresponds to the recognized text); and
automatically causing the display to display the phone number on the messaging user interface (Suplee, Para. [0019], the function or application could involve dialing a number when the text pattern indicates a phone number or opening an address book for saving contact information when the text pattern indicates a presence of a phone number and a physical address).
Although Suplee teaches extracting a phone number from a received image (Suplee, Para. [0017]), Suplee does not explicitly teach “the display is configured to display a messaging user interface” and “receiving the image data from an image on the messaging user interface”. However, in an analogous field of endeavor, Bray teaches performing contextual processing in an application of the system, the application may include one of a phone application, an SMS and MMS messaging application, a chat application, an email application, etc. (Bray, Para. [0004]). Image data is received by the data processing system. The image data may be received, for example, in an email or multimedia messaging service (MMS) message, or the image may be captured by a camera attached to the device. A text recognition module in the data processing system performs character recognition on the image data to identify textual information in the image and create a textual data stream. The textual data stream is provided to a data detection module which identifies the type of data (e.g., date, telephone number, email address, etc.) based on the structure and recognized patterns (Bray, Para. [0016]).
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the device of Suplee with the teachings of Bray by including displaying a messaging user interface (i.e., SMS and MMS messaging application) and receiving the image data from an image on the messaging user interface. One having ordinary skill in the art would have been motivated to combine these references because doing so would allow for performing textual data extraction on images from messaging applications, as recognized by Bray. Thus, the claimed invention would have been obvious to one having ordinary skill in the art before the effective filing date.
Regarding claim 15, Suplee teaches the method according to claim 11, as described above.
Although Suplee teaches extracting a phone number from a received image (Suplee, Para. [0017]), Suplee does not explicitly teach “displaying an image on a messaging user interface, wherein the image data is received by detecting the image displayed on the messaging user interface”. However, in an analogous field of endeavor, Bray teaches performing contextual processing in an application of the system, the application may include one of a phone application, an SMS and MMS messaging application, a chat application, an email application, etc. (Bray, Para. [0004]). Image data is received by the data processing system. The image data may be received, for example, in an email or multimedia messaging service (MMS) message, or the image may be captured by a camera attached to the device. A text recognition module in the data processing system performs character recognition on the image data to identify textual information in the image and create a textual data stream. The textual data stream is provided to a data detection module which identifies the type of data (e.g., date, telephone number, email address, etc.) based on the structure and recognized patterns (Bray, Para. [0016]).
The proposed combination as well as the motivation for combining the Suplee and Bray references presented in the rejection of Claim 6, apply to Claim 15 and are incorporated herein by reference. Thus, the method recited in Claim 15 is met by Suplee in view of Bray.
Regarding claim 16, Suplee in view of Bray teaches the method according to claim 15, further comprising:
detecting the text from the image data (Suplee, Para. [0017], the obtained image information is processed to locate at least one region having properties of a string of text or characters);
extracting numerical data from the detected text (Suplee, Para. [0017], the text string is analyzed using an optical character recognition algorithm to recognize text in the text string. The OCR algorithm can include a machine vision algorithm and other image preprocessing techniques or algorithms. A text pattern (e.g. an email, phone number, URL etc.) is identified that corresponds to the recognized text);
determining the phone number based on the extracted numerical data (Suplee, Para. [0019], a text pattern (e.g. an email, phone number, URL etc.) is identified that corresponds to the recognized text); and
automatically displaying the phone number on the display (Suplee, Para. [0019], the function or application could involve dialing a number when the text pattern indicates a phone number or opening an address book for saving contact information when the text pattern indicates a presence of a phone number and a physical address).
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Suplee, III et al. (US 2013/03290023 A1) in view of Gray et al. (US 9,256,795 B1).
Regarding claim 8, Suplee teaches the mobile device according to claim 7, as described above.
Although Suplee teaches extracting a phone number from a received image (Suplee, Para. [0017]), Suplee does not explicitly teach “wherein the processor is configured to execute the instructions for validating the numerical data extracted from the image data based on a predetermined pattern to obtain one or more validated numbers”. However, in an analogous field of endeavor, Gray teaches a matching score for the isolated character string is determined for how closely it matches a pattern of a phone number, an email address, or a URL. If the matching score is greater than a threshold score, the character string is identified as being a phone number, an email address, or a URL, depending on the pattern (Gray, Col. 9, lines 5-39).
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date to modify the device of Suplee with the teachings of Gray by including validating the numerical data (i.e., determining if matching score is greater than the threshold) based on a predetermined pattern to obtain the validated phone number. One having ordinary skill in the art would have been motivated to combine these references because doing so would allow for saving users time by automatically detecting phone numbers, as recognized by Gray. Thus, the claimed invention would have been obvious to one having ordinary skill in the art before the effective filing date.
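For illustration only, the threshold-based validation attributed to Gray above (scoring how closely a character string matches a phone-number pattern and accepting it only when the score clears a threshold) can be sketched as follows. The scoring heuristic and threshold value are assumptions for illustration; they are not Gray's disclosed scoring method.

```python
def phone_match_score(s):
    """Illustrative matching score: 0.0 if the string contains characters
    inconsistent with a phone number, otherwise a score that peaks when the
    digit count equals the assumed 10-digit national-number length."""
    allowed = set("0123456789()-. +")
    if not s or any(c not in allowed for c in s):
        return 0.0
    digits = sum(c.isdigit() for c in s)
    return max(0.0, 1.0 - abs(digits - 10) / 10.0)

def validate_phone(s, threshold=0.8):
    """Return the string as a validated number only if its matching score
    is greater than or equal to the (assumed) threshold score."""
    return s if phone_match_score(s) >= threshold else None
```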
Claims 9 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Suplee, III et al. (US 2013/03290023 A1) in view of Gray et al. (US 9,256,795 B1), as applied to claim 8, and further in view of Gosukonda et al. (US 2018/0024972 A1).
Regarding claim 9, Suplee in view of Gray teaches the mobile device according to claim 8, as described above.
Although Suplee in view of Gray teaches determining a user’s location from a global positioning system (GPS) (Suplee, Para. [0016]), they do not explicitly teach “wherein the processor is configured to execute the instructions for formatting the one or more validated numbers based on a location of the mobile device”. However, in an analogous field of endeavor, Gosukonda teaches monitoring one or more locations, such as the location of the touchscreen device, and uses the user location as part of determining a context to replace application text selection rules with user-defined context-aware text selection rules (Gosukonda, Para. [0073]). Phone number parsers may include rules for: determining country codes, whether the phone number is a landline phone number or a mobile phone number, and whether there are formatting conventions such as parentheses around an area code as was frequently done in the United States, spacing or dashes between different segments of a phone number. In some embodiments, a phone number parser knows how to distinguish between the format for an in country phone number and a phone number to be accessed from another country (Gosukonda, Para. [0092]).
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the device of Suplee in view of Gray with the teachings of Gosukonda by including formatting numbers based on a location of a mobile device by replacing the formatting rules of a phone number parser based on the location of the user. One having ordinary skill in the art would have been motivated to combine these references because doing so would allow for a device that knows how to distinguish between the format for different location phone numbers, as recognized by Gosukonda. Thus, the claimed invention would have been obvious to one having ordinary skill in the art before the effective filing date.
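For illustration only, the location-dependent formatting conventions Gosukonda is cited for above (e.g., parentheses around a United States area code versus other countries' spacing conventions) can be sketched as follows. The per-country rules below are illustrative assumptions, not Gosukonda's disclosed parser rules.

```python
def format_number(digits, country):
    """Render the same validated 10-digit national number with assumed
    per-country formatting conventions."""
    assert len(digits) == 10 and digits.isdigit()
    if country == "US":
        # assumed convention: parentheses around the area code
        return f"({digits[:3]}) {digits[3:6]}-{digits[6:]}"
    if country == "FR":
        # assumed convention: pairs of digits separated by spaces
        return " ".join(digits[i:i + 2] for i in range(0, 10, 2))
    return digits  # fall back to the unformatted digit string
```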
Regarding claim 18, Suplee teaches the method according to claim 17, as described above.
Although Suplee teaches extracting a phone number from a received image (Suplee, Para. [0017]), Suplee does not explicitly teach “validating the numerical data extracted from the image data based on a predetermined pattern to obtain one or more validated numbers”. However, in an analogous field of endeavor, Gray teaches a matching score for the isolated character string is determined for how closely it matches a pattern of a phone number, an email address, or a URL. If the matching score is greater than a threshold score, the character string is identified as being a phone number, an email address, or a URL, depending on the pattern (Gray, Col. 9, lines 5-39).
The proposed combination as well as the motivation for combining the Suplee and Gray references presented in the rejection of Claim 8, apply to Claim 18 and are incorporated herein by reference.
Although Suplee in view of Gray teaches determining a user’s location from a global positioning system (GPS) (Suplee, Para. [0016]), they do not explicitly teach “formatting the one or more validated numbers based on a location of the mobile device”. However, in an analogous field of endeavor, Gosukonda teaches monitoring one or more locations, such as the location of the touchscreen device, and uses the user location as part of determining a context to replace application text selection rules with user-defined context-aware text selection rules (Gosukonda, Para. [0073]). Phone number parsers may include rules for: determining country codes, whether the phone number is a landline phone number or a mobile phone number, and whether there are formatting conventions such as parentheses around an area code as was frequently done in the United States, spacing or dashes between different segments of a phone number. In some embodiments, a phone number parser knows how to distinguish between the format for an in country phone number and a phone number to be accessed from another country (Gosukonda, Para. [0092]).
The proposed combination as well as the motivation for combining the Suplee, Gray, and Gosukonda references presented in the rejection of Claim 9, apply to Claim 18 and are incorporated herein by reference. Thus, the method recited in claim 18 is met by Suplee in view of Gray further in view of Gosukonda.
Claims 10 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Suplee, III et al. (US 2013/03290023 A1) in view of Gray et al. (US 9,256,795 B1) further in view of Gosukonda et al. (US 2018/0024972 A1), as applied to claims 9 and 18 above, and further in view of Raymond Amand Lorie (US 6,577,755 B1).
Regarding claim 10, Suplee in view of Gray further in view of Gosukonda teaches the mobile device according to claim 9, as described above.
Although Suplee in view of Gray further in view of Gosukonda teaches a matching score to determine how closely a text string matches a phone number pattern (Gray, Col. 9, lines 5-39), they do not explicitly teach “wherein the processor is configured to execute the instructions for determining a phone number with a highest probability among the one or more validated numbers to determine the phone number displayed on the display”. However, in an analogous field of endeavor, Lorie teaches confidence or probability measurements corresponding with the recognized characters (Lorie, Col. 3, lines 38-45). For instance, the month digit was recognized as either a 2 or a 7, with 2 having a higher recognition confidence. The probabilities can be accumulated along the paths. The full set of solutions would yield 2/5/99, 2/5/94, 2/5/92, 7/5/99, 7/5/94, and 7/5/92. However, if we assume that we have no other a priori information on the dates, the best choice is obtained by simply picking up the best choice for each character, according to the confidence levels expressed in parentheses in FIG. 2, yielding 2/5/99. The same is done for the other valid syntaxes; global probabilities are used to choose the optimum (Lorie, Col. 8, lines 45-59).
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the device of Suplee in view of Gray further in view of Gosukonda with the teachings of Lorie by including determining a probability for the recognized characters (i.e., validated numbers) and choosing the optimum number based on the probabilities. One having ordinary skill in the art would have been motivated to combine these references, because doing so would allow for determining the best recognition result based on confidence of correct detection, as recognized by Lorie. Thus, the claimed invention would have been obvious to one having ordinary skill in the art before the effective filing date.
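For illustration only, the confidence-driven selection Lorie is cited for above (candidate readings per character position, with the best overall string built by taking the highest-confidence candidate at each position absent other a priori constraints, as in the 2-versus-7 month-digit example) can be sketched as follows. The data layout is an assumption for illustration.

```python
def best_reading(candidates_per_position):
    """Given one {character: confidence} dict per recognized position,
    pick the highest-confidence candidate at each position and join them,
    mirroring Lorie's per-character best-choice selection."""
    return "".join(max(c, key=c.get) for c in candidates_per_position)
```

Applied to a recognized phone number, each position's dict would hold the OCR engine's alternative digit readings, and the joined result is the number with the highest overall probability.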
Regarding claim 19, Suplee in view of Gray further in view of Gosukonda teaches the method according to claim 18, as described above.
Although Suplee in view of Gray further in view of Gosukonda teaches a matching score to determine how closely a text string matches a phone number pattern (Gray, Col. 9, lines 5-39), they do not explicitly teach “determining a phone number with a highest probability among the one or more validated numbers to determine the phone number displayed on the display”. However, in an analogous field of endeavor, Lorie teaches confidence or probability measurements corresponding with the recognized characters (Lorie, Col. 3, lines 38-45). For instance, the month digit was recognized as either a 2 or a 7, with 2 having a higher recognition confidence. The probabilities can be accumulated along the paths. The full set of solutions would yield 2/5/99, 2/5/94, 2/5/92, 7/5/99, 7/5/94, and 7/5/92. However, if we assume that we have no other a priori information on the dates, the best choice is obtained by simply picking up the best choice for each character, according to the confidence levels expressed in parentheses in FIG. 2, yielding 2/5/99. The same is done for the other valid syntaxes; global probabilities are used to choose the optimum (Lorie, Col. 8, lines 45-59).
The proposed combination as well as the motivation for combining the Suplee, Gray, Gosukonda, and Lorie references presented in the rejection of Claim 10, apply to Claim 19 and are incorporated herein by reference. Thus, the method recited in Claim 19 is met by Suplee in view of Gray further in view of Gosukonda and Lorie.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Emma Rose Goebel whose telephone number is (703)756-5582. The examiner can normally be reached Monday - Friday 7:30-5.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Amandeep Saini can be reached at (571) 272-3382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Emma Rose Goebel/
Examiner, Art Unit 2662

/AMANDEEP SAINI/
Supervisory Patent Examiner, Art Unit 2662