DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This action is responsive to the Request for Continued Examination filed on 03/16/2026.
Claims 1-11 and 13-21 are pending in the case.
No further claims have been cancelled.
Claim 21 has been added.
Claims 1, 11 and 13 are independent claims.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claim(s) 1, 5-11, 13, 14 and 18-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Sue Waters (“Identifying items using Google Lens”, published 09/10/2019, hereinafter “Waters”) in view of Gao et al. (US 2022/0329908 A1, effectively filed on 05/19/2021, hereinafter “Gao”).
Independent Claims 1, 11 and 13:
Waters discloses an apparatus comprising:
at least a processor; and at least one memory [non-transitory computer-readable storage medium] communicatively coupled to the at least one processor and storing instructions that upon execution by the at least one processor cause the apparatus to perform a method comprising (The Google Lens feature is available through the Google Photos application and, as can be seen via Figs. 1-21 of Waters, the Google Photos application is accessed via a smartphone device. Although there is no explicit discussion of a processor, memory and stored instructions, such components are necessary to provide the Google Photos application functionality demonstrated in the figures and are typical components of a smartphone device.):
displaying an object recognition component on a first page (A Google Lens icon (object recognition component) is displayed on a first page of the Google Photos application, Waters: Figs. 1-5, page 13 timestamp 0:00-0:20.);
jumping from the first page to a second page in response to detecting a trigger signal to the object recognition component (Waters: Figs. 6-12, page 13 timestamp 0:15-0:31.);
displaying a scan area on the second page to recognize an object in the scan area (When the user selects the Google Lens icon, a page is displayed where an animation of dots floating around the screen is rendered while the photo is being analyzed for object recognition (scan area), Waters: Figs. 6-12, page 13 timestamp 0:15-0:31.);
displaying, in response to recognizing the object in the scan area, a result display component corresponding to the recognized object on the second page (When the object is recognized, a related results section is displayed, Waters: Figs. 6-12, page 13 timestamp 0:15-0:31.); and
jumping from the second page to a third page in response to detecting a trigger signal to the result display component, wherein a content of the third page is related to an object corresponding to the result display component (The user can select a result to display additional information regarding the result, Waters: Figs. 13-18, page 13 timestamp 0:31-0:42.).
Waters does not appear to expressly teach a method, apparatus and medium wherein the result display component corresponds to a quantity of the recognized object.
However, Gao teaches a method, apparatus and medium wherein the result display component corresponds to a quantity of the recognized object (The result area displays results that correspond to a number of recognized faces (quantity of the recognized object) that surpass a similarity threshold, Gao: Figs. 6a-8a, ¶ [0052]-[0061]).
Accordingly, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the method, apparatus and medium of Waters wherein the result display component corresponds to a quantity of the recognized object, as taught by Gao.
One would have been motivated to make such a combination in order to improve the user’s experience by more efficiently providing the user with relevant results when the image comprises multiple instances of a same recognized object (Gao: Figs. 6a-8a, ¶ [0052]-[0061]). In particular, the user saves time when results for the different recognized objects within a single image can be presented at one time, instead of the user having to provide a separate image for each object in order to retrieve results for each object.
Claims 5 and 18:
The rejections of claims 1 and 11 are incorporated. Waters in view of Gao further teaches a method and apparatus wherein the displaying a result display component corresponding to a quantity of the recognized object on the second page comprises:
displaying the result display component at a predetermined position on the second page, wherein a quantity of the result display component is the same as the quantity of the recognized object (When the object is recognized, a related results section is displayed, Waters: Figs. 6-12, page 13 timestamp 0:15-0:31. The number of results matches the number of objects that are recognized, Gao: Figs. 6a-8a, ¶ [0052]-[0061]), and
wherein a result display component corresponding to a first object is displayed at a middle part of the predetermined position, and the first object meets a predetermined condition (A recognized object is displayed in the middle of the area that displays recognized content (predetermined position), Gao: Fig. 8a, ¶ [0102]. The recognized object in the middle meets a similarity threshold (predetermined condition), Gao: Figs. 6a-8a, ¶ [0052]-[0061]).
One would have been motivated to make such a combination in order to improve the user’s experience by providing an effective presentation of relevant results when the image comprises multiple instances of a same recognized object (Gao: Figs. 6a-8a, ¶ [0052]-[0061], [0102]).
Claims 6 and 19:
The rejections of claims 1 and 11 are incorporated. Waters in view of Gao further teaches a method and apparatus further comprising:
hiding or partially displaying the result display component if the quantity of the result display component is greater than a maximum display quantity of the second page (When the object is recognized, only two result items fit within the results area of the second page, Waters: Figs. 9-12, page 13 timestamp 0:15-0:31. The remaining result items are hidden until the user provides further input, Waters: Figs. 13-16.); and
displaying or completely displaying the hidden or partially displayed result display component in response to receiving a switch signal to the result display component (The user can provide input to switch from a contracted view of the results area to an extended view of the results area, Waters: Figs. 13-16.).
Claim 7:
The rejection of claim 1 is incorporated. Waters in view of Gao further teaches a method wherein the result display component comprises an information display area, and the information display area is configured to display information of the object corresponding to the result display component (Waters: Figs. 13-16; Gao: Figs. 6a-8a, ¶ [0052]-[0061].).
Claim 8:
The rejection of claim 1 is incorporated. Waters in view of Gao further teaches a method wherein the third page comprises information related to the object and/or a jump portal for the information related to the object (When the user selects one of the results, a third page is displayed comprising information related to the detected object, Waters: Figs. 17-18, page 13 timestamp 0:31-0:42.).
Claim 9:
The rejection of claim 1 is incorporated. Waters in view of Gao further teaches a method further comprising:
displaying prompt information in the scan area until the result display component is displayed on the second page (When the image is being analyzed, a dot animation is generated (prompt information) until the results are displayed, Waters: Figs. 6-12, page 13 timestamp 0:20-0:31.).
Claim 10:
The rejection of claim 1 is incorporated. Waters in view of Gao further teaches a method wherein the displaying a result display component corresponding to a quantity of the recognized object on the second page comprises:
displaying, in the result display component, information of an object with a maximum similarity to the object in the scan area (When the user selects the control of “QX” the content profile of “QX” is displayed, Gao: Fig. 8b, ¶ [0108]. The person corresponding to “QX” has the greatest similarity to a corresponding object in the image, Gao: ¶ [0052]); and
switching, in response to receiving an information switching signal to the result display component, information of the object displayed in the result display component to information of another similar object (When the user selects the control of “Like WZW” the content profile of “WZW” is presented, Gao: Figs. 8b-8d, ¶ [0108]-[0109]. Accordingly, the information switches from information corresponding to a first detected object (a detected person) to information corresponding to a second detected object (a detected person) that is similar to the first detected object (both objects are persons).).
One would have been motivated to make such a combination in order to improve the user’s experience by enabling the user to display the information of a corresponding object according to his/her needs (Gao: Figs. 8b-8d, ¶ [0108]-[0109].).
Claims 14 and 20:
The rejections of claims 5 and 18 are incorporated. Waters in view of Gao further teaches a method further comprising:
hiding or partially displaying the result display component if the quantity of the result display component is greater than a maximum display quantity of the second page (When the object is recognized, only two result items fit within the results area of the second page, Waters: Figs. 9-12, page 13 timestamp 0:15-0:31. The remaining result items are hidden until the user provides further input, Waters: Figs. 13-16.); and
displaying or completely displaying the hidden or partially displayed result display component in response to receiving a switch signal to the result display component (The user can provide input to switch from a contracted view of the results area to an extended view of the results area, Waters: Figs. 13-16.).
Claim(s) 2, 3, 15 and 16 is/are rejected under 35 U.S.C. 103 as being unpatentable over Waters in view of Gao and further in view of Deng (US 2018/0373949 A1, published 12/27/2018, hereinafter “Deng”).
Claims 2 and 15:
The rejections of claims 1 and 11 are incorporated. Waters in view of Gao does not appear to expressly teach a method and apparatus wherein the displaying a scan area on the second page to recognize an object in the scan area comprises:
displaying a scan line moving cyclically from a start position to an end position, wherein the scan area is an area between the start position and the end position; and
stopping displaying the scan line in a case that a focusable object and an outer frame of the object are displayed in the scan area.
However, Deng teaches a method and apparatus wherein the displaying a scan area on the second page to recognize an object in the scan area comprises:
displaying a scan line moving cyclically from a start position to an end position, wherein the scan area is an area between the start position and the end position (Deng: Fig. 2, ¶ [0037]-[0038]); and
stopping displaying the scan line in a case that a focusable object and an outer frame of the object are displayed in the scan area (Deng: Fig. 2, ¶ [0032]-[0040]).
Accordingly, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the method and apparatus of Waters in view of Gao wherein the displaying a scan area on the second page to recognize an object in the scan area comprises:
displaying a scan line moving cyclically from a start position to an end position, wherein the scan area is an area between the start position and the end position; and
stopping displaying the scan line in a case that a focusable object and an outer frame of the object are displayed in the scan area, as taught by Deng.
One would have been motivated to make such a combination in order to substitute one known element (a dot animation) for another known element (moving line animation) to produce the predictable result of indicating to the user that objects are being scanned. Also, a line moving left and right across the scanning area provides a clearer indication of the scanning operation.
Claims 3 and 16:
The rejections of claims 2 and 15 are incorporated. Waters in view of Gao and further in view of Deng further teaches a method and apparatus further comprising:
displaying a first dynamic identifier in the outer frame of the object, wherein the first dynamic identifier indicates that the object in the outer frame is being recognized (The appearance of the outer frame of each object is different and is used to identify the type of object that is being recognized, Deng: Figs. 3A-4, ¶ [0032]-[0052]. The appearances of the outer frame can be changed (dynamic identifier), Deng: ¶ [0053]-[0054]).
Claim(s) 4 and 17 is/are rejected under 35 U.S.C. 103 as being unpatentable over Waters in view of Gao and further in view of Gokturk et al. (US 2012/0304125 A1, published 11/29/2012, hereinafter “Gokturk”).
Claims 4 and 17:
The rejections of claims 1 and 11 are incorporated. Waters in view of Gao does not appear to expressly teach a method and apparatus wherein the recognizing the object in the scan area comprises:
displaying, in the scan area, an anchor point and a name of the recognized object.
However, Gokturk teaches a method and apparatus wherein the recognizing the object in the image comprises:
displaying, in the image, an anchor point and a name of the recognized object (Gokturk: Figs. 1 and 20, ¶ [0051]-[0055], [0285]-[0288]).
Accordingly, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the method and apparatus of Waters in view of Gao wherein the recognizing the object in the image comprises:
displaying, in the image, an anchor point and a name of the recognized object, as taught by Gokturk.
One would have been motivated to make such a combination in order to more clearly correlate the recognized objects within the image with the results (Gokturk: Figs. 1 and 20, ¶ [0051]-[0055], [0285]-[0288]).
In implementing the labeling feature of Gokturk into the invention of Waters in view of Gao, the image where the labels are displayed (as taught by Gokturk) would correspond to a scan area, since the image undergoing object recognition is presented in a scan area in the invention of Waters in view of Gao. Accordingly, in combination, Waters in view of Gao and further in view of Gokturk teaches a method and apparatus wherein the recognizing the object in the scan area comprises:
displaying, in the scan area, an anchor point and a name of the recognized object.
Claim(s) 21 is/are rejected under 35 U.S.C. 103 as being unpatentable over Waters in view of Gao, further in view of Gray et al. (US 9,841,879 B1, issued 12/12/2017, hereinafter “Gray”) and further in view of Tsuchimochi (US 2018/0068533 A1, published 03/08/2018, hereinafter “Tsuchimochi”).
Claim 21:
The rejection of claim 1 is incorporated. Waters in view of Gao does not appear to expressly teach a method further comprising:
displaying second prompt information on the second page in response to no object being recognized in the scan area within a predetermined time period; and
providing a re-recognition component on the second page, and re-recognizing the object in the scan area in response to detecting a trigger signal to the re-recognition component.
However, Gray teaches a method comprising:
displaying second prompt information on the second page in response to no object being recognized in the scan area (When no objects can be recognized in the image, firefly indicators can form a question mark on the display where the image is being analyzed (scan area), Gray: Fig. 3(d), column 9 lines 41-50. The firefly indicators can be provided for both live and still images, Gray: column 4 lines 26-52.); and
providing a re-recognition component on the second page, and re-recognizing the object in the scan area in response to detecting a trigger signal to the re-recognition component (The fireflies can form into an image of a selectable element that will enable the user to re-recognize the object in the scan area, Gray: column 9 lines 61-67 and column 10 lines 1-19.).
Accordingly, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the method of Waters in view of Gao to comprise:
displaying second prompt information on the second page in response to no object being recognized in the scan area; and
providing a re-recognition component on the second page, and re-recognizing the object in the scan area in response to detecting a trigger signal to the re-recognition component, as taught by Gray.
One would have been motivated to make such a combination in order to improve the user’s experience by better assisting the user in enabling object recognition when the image quality is preventing objects from being recognized, and to provide better object recognition capabilities by enabling objects to be recognized in live and still images (Gray: column 4 lines 26-52, column 9 lines 41-67 and column 10 lines 1-19.).
Waters in view of Gao and further in view of Gray does not appear to expressly teach a method wherein the determination of no object being recognized is made within a predetermined time period.
However, Tsuchimochi teaches a method wherein no object being recognized in the scan area is determined within a predetermined time period (Tsuchimochi: ¶ [0080].).
Accordingly, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the method of Waters in view of Gao and further in view of Gray wherein no object being recognized in the scan area is determined within a predetermined time period, as taught by Tsuchimochi.
One would have been motivated to make such a combination in order to provide an effective means for determining that no recognition can be made (Tsuchimochi: ¶ [0080].).
Response to Arguments
Applicant’s prior art arguments have been fully considered but they are not persuasive.
In regards to claims 1, 11 and 13, Applicant argues that the prior art of record does not teach “displaying a scan area on the second page to recognize an object in the scan area” because the Google Lens feature demonstrated in Waters operates on a pre-existing photograph rather than as a live-scanning interaction on a dedicated second page, and because the dots in Waters represent a loading animation displayed while the system performs backend recognition on a static image. Applicant contends this is different from the claimed invention, which requires a scan area with scan lines that displays recognized objects with outer frames, allowing users to actively scan and recognize multiple objects in real time (Remarks: pages 7 and 8). Examiner respectfully disagrees.
As an initial matter, claims 1, 11 and 13 do not specify whether the object recognition is being performed on a live image feed or a pre-existing static image. As such, whether or not Waters teaches object recognition on a live image or a static image is irrelevant.
Secondly, Examiner considers object recognition to be a scanning operation. Accordingly, Examiner considers any area of the screen that displays an image (live or static) that is going to undergo, or is currently undergoing, object recognition to be a “scan area.” Waters teaches that the displayed image undergoes object recognition. Accordingly, the area where the image is displayed is considered a “scan area.” Applicant’s argument that the scan area requires scan lines and outer frames does not reflect the limitations of claims 1, 11 and 13. Such features are part of claims 2 and 3. Examiner does not rely on Waters to teach these features; rather, Examiner relies on Deng. Accordingly, the scan line and outer frame features are irrelevant to the limitations of claims 1, 11 and 13.
Applicant also argues, in regards to claims 1, 11 and 13, that the prior art of record does not teach “displaying, in response to recognizing the object in the scan area, a result display component corresponding to a quantity of the recognized object on the second page” because the multiple recognition results of Gao represent different potential matches for the same face based on similarity rankings, not distinct result display components corresponding to distinct recognized objects (Remarks: pages 8-9). Examiner respectfully disagrees.
Applicant has misconstrued the teachings of Gao. Fig. 4 of Gao shows different objects A, B, C, D and E. In this case, the objects correspond to the faces of people. Paragraph [0052] of Gao teaches that “It is assumed that after a command for capturing a screenshot is received, through image recognition, keywords “QX”, “LT”, “WZW”, “YZ”, and “JX” are respectively the recognition results corresponding to the objects A, B, C, D, and E, and corresponding content profile are obtained. Moreover, similarity between facial information of “QX” and facial information of the subject A is 95%, similarity between facial information of “LT” and facial information of the subject B is 81%, similarity between facial information of “WZW” and facial information of the subject C is 87%, similarity between facial information of “YZ” and facial information of the subject D is 75%, and similarity between facial information of “JX” and facial information of the subject E is 50%.” Accordingly, each object is associated with a corresponding recognition result with varying similarity scores. In a particular embodiment, if the recognized objects surpass a particular threshold, a recognition result is displayed for each one of the objects that surpass said particular threshold (a result display component corresponding to a quantity of the recognized object) (Gao: Figs. 4 and 6a, ¶ [0053]-[0058]). Accordingly, Applicant’s argument is unpersuasive.
Applicant also argues, in regards to claims 1, 11 and 13, that the motivation to combine Gao with Waters does not logically apply to the proposed combination because Waters is designed for identifying unknown objects to help the user learn what those objects are, whereas Gao is designed for identifying known celebrity faces in video screenshots to provide actor or character information. Applicant contends that incorporating Gao’s face recognition results into Waters’ general object identification system would fundamentally alter Waters’ principle of operation and would render Waters unsatisfactory for its intended purpose of general object identification (Remarks: page 9). Examiner respectfully disagrees.
Waters teaches that “you can also use google lens for identifying plants animals buildings you name it you can try it” (Waters: page 13 timestamps 0:42-0:48). Accordingly, there is no apparent limitation to what can be identified. Whether an object is unknown by the user depends on the user, not on the application. The application provides information associated with an object in an image; if the user does not already know the object, he/she could be learning about the object for the first time, and if the user already knows the object, he/she may be looking for additional information to learn more about it. The same is true in Gao. Gao provides information about recognized people in an image (screenshot). For example, paragraph [0053] of Gao teaches that “A screenshot area (a face region) corresponding to the object A in the screenshot image shown in FIG. 4, the keyword “QX” matching with the object A, and content profile related to “QX” that “QX, born on Nov. 23, 1993 in . . . , officially entered the show business by playing a role in the family drama “Qi Jiu He Kai” . . . ” are displayed in a bar 8220 for displaying a recognized character.” Gao provides information on a recognized object; whether or not the object is unknown by the user is also determined by the user, not the application. Adding Gao’s facial recognition feature does not take away any features from Waters; instead, it provides additional capabilities for recognizing more types of objects. Accordingly, Applicant’s arguments are unpersuasive.
Applicant also argues, in regards to claims 1, 11 and 13, that the prior art of record does not teach “jumping from the second page to a third page in response to detecting a trigger signal to the result display component, wherein a content of the third page is related to an object corresponding to the result display component” because Waters shows clicking on search results that redirect to external shopping websites, whereas, according to Applicant, the claimed third page must be within the application’s UI (Remarks: page 10). Examiner respectfully disagrees.
Nowhere in claims 1, 11 and 13 is it required that the second and third pages be within the same application’s UI. Accordingly, Applicant’s argument is not persuasive.
In regards to claims 2 and 15, Applicant argues that the prior art of record does not teach “displaying a scan line moving cyclically from a start position to an end position, wherein the scan area is an area between the start position and the end position; and stopping displaying the scan line in a case that a focusable object and an outer frame of the object are displayed in the scan area” because Deng does not teach the specific condition that the scan line stops when both a “focusable object” and “an outer frame of the object” are displayed in the scan area (Remarks: page 10). Examiner respectfully disagrees.
Deng teaches that an animation of the identification box (item 220 in Fig. 2) includes moving the scan line left and right or up and down while scanning the object (Deng: ¶ [0037]-[0038]). The scanning is stopped when an item is recognized (focusable object) and an identification box having an appearance that matches the object is displayed (stopping displaying the scan line in a case that a focusable object and an outer frame of the object are displayed in the scan area) (Deng: Fig. 2, ¶ [0032]-[0040]). Accordingly, Deng fully teaches the limitations of claim 2.
In regards to claims 5 and 18, Applicant argues that the prior art of record does not teach the limitations of claims 5 and 18 because the ordering of the recognition results of Gao relates to similarity rankings of potential matches for a single face, not the spatial arrangement of result components corresponding to different recognized objects (Remarks: pages 10 and 11). Examiner respectfully disagrees.
Again, Applicant has misconstrued the teachings of Gao. Fig. 4 of Gao shows different objects A, B, C, D and E. In this case, the objects correspond to the faces of people. Paragraph [0052] of Gao teaches that “It is assumed that after a command for capturing a screenshot is received, through image recognition, keywords “QX”, “LT”, “WZW”, “YZ”, and “JX” are respectively the recognition results corresponding to the objects A, B, C, D, and E, and corresponding content profile are obtained. Moreover, similarity between facial information of “QX” and facial information of the subject A is 95%, similarity between facial information of “LT” and facial information of the subject B is 81%, similarity between facial information of “WZW” and facial information of the subject C is 87%, similarity between facial information of “YZ” and facial information of the subject D is 75%, and similarity between facial information of “JX” and facial information of the subject E is 50%.” Accordingly, each object is associated with a corresponding recognition result with varying similarity scores. In a particular embodiment, if the recognized objects surpass a particular threshold, a recognition result is displayed for each one of the objects that surpass said particular threshold (a result display component corresponding to a quantity of the recognized object) (Gao: Figs. 4 and 6a, ¶ [0053]-[0058]). Accordingly, Applicant’s argument is unpersuasive.
In response to the Advisory Action, Applicant argues that the prior art does not teach a scan area on the second page because the concept of a scan area as recited in the claims implies a defined region of the interface within which objects are identified, framed, and correlated to corresponding result display components, and Waters does not teach this (Remarks: page 11). Examiner respectfully disagrees.
As presented in the advisory action and above, Waters teaches that the displayed image undergoes object recognition. Accordingly, the area where the image is displayed is considered a “scan area.”
In response to the Advisory Action, Applicant argues that the result display mechanism in Gao does not present result display components simultaneously on a second page alongside a scan area in a one-to-one correspondence with recognized objects (Remarks: page 12). Examiner respectfully disagrees.
Examiner relies on a combination of references to teach this feature (see the rejection of claims 5 and 18 above). One cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 U.S.P.Q. 871 (C.C.P.A. 1981); In re Merck & Co., 800 F.2d 1091, 231 U.S.P.Q. 375 (Fed. Cir. 1986).
In response to the Advisory Action, Applicant argues that no articulated reasoning has been provided as to why a person of ordinary skill in the art would undergo a redesign to arrive at the claimed three-page workflow with quantity-correlated result display components (Remarks: page 12). Examiner respectfully disagrees.
The rejection above articulates the motivation: one would have been motivated to make such a combination in order to improve the user’s experience by more efficiently providing the user with relevant results when the image comprises multiple instances of a same recognized object (Gao: Figs. 6a-8a, ¶ [0052]-[0061]), because presenting results for multiple recognized objects within a single image at one time saves the user from having to provide a separate image for each object in order to retrieve results for each object.
Therefore, Examiner respectfully asserts that the cited art sufficiently teaches the limitations recited in the claims.
Conclusion
Examiner has cited particular columns and line numbers and/or paragraph numbers in the references applied to the claims above for the convenience of the applicant. Although the specified citations are representative of the teachings of the art and are applied to specific limitations within the individual claims, other passages and figures may apply as well. Applicant is respectfully requested, in preparing responses, to fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the examiner.
The examiner requests, in response to this Office action, support be shown for language added to any original claims on amendment and any new claims. That is, indicate support for newly added claim language by specifically pointing to page(s) and line number(s) in the specification and/or drawing figure(s). This will assist the examiner in prosecuting the application.
When responding to this Office action, Applicant is advised to clearly point out the patentable novelty which he or she thinks the claims present, in view of the state of the art disclosed by the references cited or the objections made. He or she must also show how the amendments avoid such references or objections. See 37 CFR 1.111(c).
The prior art made of record and not relied upon is considered pertinent to Applicant’s disclosure.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DANIEL RODRIGUEZ whose telephone number is (571)272-3633. The examiner can normally be reached Monday-Friday 5:30 am - 2:30 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Stephen Hong can be reached at (571) 272-4124. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DANIEL RODRIGUEZ/Primary Examiner, Art Unit 2178