Prosecution Insights
Last updated: April 19, 2026
Application No. 18/973,676

INTERACTIVE OBJECT DISPLAYING STRUCTURES AND METHODS OF USE

Current Status: Final Rejection (§103)
Filed: Dec 09, 2024
Examiner: POLO, GUSTAVO D
Art Unit: 2622
Tech Center: 2600 — Communications
Assignee: Pharmavision LLC
OA Round: 4 (Final)

Grant Probability: 85% (Favorable)
Expected OA Rounds: 5-6
Median Time to Grant: 2y 3m
Grant Probability With Interview: 98%

Examiner Intelligence

Career Allow Rate: 85% (646 granted / 761 resolved), above average, +22.9% vs TC avg
Interview Lift: +12.7% (moderate) on resolved cases with interview
Typical Timeline: 2y 3m average prosecution; 11 applications currently pending
Career History: 772 total applications across all art units
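The headline figures above can be reproduced directly from the raw counts shown. This is a minimal sanity-check sketch; the assumption that the dashboard rounds the allow rate to the nearest whole percent is ours, not stated on the page:

```python
# Reproduce the examiner's headline statistics from the raw counts above.
# Assumption: the dashboard rounds the allow rate to the nearest whole percent.

granted = 646      # career grants
resolved = 761     # career resolved cases
pending = 11       # applications currently pending
total = 772        # total applications across all art units

allow_rate = 100 * granted / resolved
print(f"Career allow rate: {allow_rate:.1f}% (displayed as {round(allow_rate)}%)")

# Total applications should equal resolved plus currently pending.
assert resolved + pending == total
```

Running this gives an allow rate of about 84.9%, which the dashboard displays as 85%.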

Statute-Specific Performance

§101: 1.4% (-38.6% vs TC avg)
§103: 50.5% (+10.5% vs TC avg)
§102: 37.2% (-2.8% vs TC avg)
§112: 8.8% (-31.2% vs TC avg)

Tech Center averages are estimates. Based on career data from 761 resolved cases.
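The "vs TC avg" deltas above are internally consistent: assuming they are simple percentage-point differences (our assumption about how the dashboard computes them), subtracting each delta from the examiner's rate recovers the Tech Center average estimate, and in this dataset the implied estimate works out to the same 40.0% for every statute:

```python
# Recover the implied Tech Center average for each statute from the
# examiner's rate and its "vs TC avg" delta: TC average = rate - delta.
# Assumption: deltas are percentage-point differences, not ratios.

stats = {
    "§101": (1.4, -38.6),
    "§103": (50.5, +10.5),
    "§102": (37.2, -2.8),
    "§112": (8.8, -31.2),
}

for statute, (rate, delta) in stats.items():
    tc_avg = round(rate - delta, 1)
    print(f"{statute}: examiner {rate}% vs implied TC avg {tc_avg}%")
```

Each line prints an implied Tech Center average of 40.0%, consistent with the single "Tech Center average estimate" baseline the chart draws.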

Office Action

§103

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Stark, Pub. No. US 2013/0090996 A1 [Stark], in view of Parshin et al., Patent No. US 10,762,411 B1 [Parshin], and further in view of Jenkins et al., Pub. No. US 2023/0410037 A1 [Jenkins].

1. Stark discloses an interactive object displaying apparatus [Fig. 1], the interactive object displaying apparatus comprising: at least a structure comprising one or more shelves [¶ 9], wherein the one or more shelves are configured to receive at least an object [Fig. 2, by means of 26, for instance]; a display device coupled to the at least a structure [¶ 20: each compartment is a display screen 22a, 22b, 22c or 22d (individually and collectively display screen(s) 22)], wherein the display device is communicatively connected to at least a sensor [¶ 23], wherein the display device is configured to display a content [¶¶ 19-20: sensor 34b is associated with compartment 32b; sensor 34c is associated with compartment 32c, and so on; display screens 22 and content sensors 34 are interconnected with one or more computing devices 30 operable to monitor the content of each compartment and thereby the "state" of shelf unit 12, and present video images on screens]; the at least a sensor coupled to the display device, wherein the at least a sensor is configured to detect sensor data [¶¶ 19-23]; and a controller communicatively connected to the at least a sensor and the display device, wherein the controller is configured to transmit the content to the display device [Fig. 3, 30 & ¶ 27].

Stark does not explicitly disclose the added limitation "wherein the at least a sensor is located on a flat surface of a housing of the display device." However, Parshin teaches a smart shelf apparatus with a sensor on a flat front surface of a housing [Fig. 2B, 112(2), for instance]. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Stark with Parshin, since such a modification optimizes sensing through sensor placement.

Stark in view of Parshin is silent on wherein the sensor data comprises a proximity signal based on a predetermined range from the at least a sensor. However, Jenkins teaches a smart shelf monitoring system [Figs. 7-8 & ¶ 70, for instance] wherein a user's device within range of a sensor triggers an alert [see ¶¶ 71-78, for instance]. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Stark in view of Parshin with Jenkins, since such a modification allows for optimized tracking and updating of devices.

2. Stark in view of Parshin and further in view of Jenkins teaches wherein the at least a sensor comprises a motion detection sensor, wherein the motion detection sensor [Stark ¶ 36, machine vision sensor] is configured to: detect a motion signal [id., presence vs. non-presence, for instance]; transmit the motion signal to the controller [¶ 37, using software]; and display, using the controller, the content [¶ 45].

3. Stark in view of Parshin and further in view of Jenkins teaches wherein the display device comprises a structure display device, wherein the structure display device is configured to display the content as a function of the at least an object [Stark ¶ 32: use sensors 34 to sense the contents of each compartment 32; under software control, computing device 30 may then control what is being displayed on each of screens 22 in dependence on the contents of the compartments].

4. Stark in view of Parshin and further in view of Jenkins teaches wherein displaying the content as a function of the at least an object comprises: identifying, using an object unique identifier of the sensor data, object data [Stark ¶¶ 32-33]; transmitting the object data to the controller [¶ 32]; and generating, using the controller, the content [id.: computing device 30, using conventional machine vision techniques, may use sensors 34 to sense the contents of each compartment 32; under software control, computing device 30 may then control what is being displayed on each of the screens].

5. Stark in view of Parshin and further in view of Jenkins teaches wherein the at least a structure comprises reconfigurable elements, wherein the reconfigurable elements are configured to accommodate at least an object of varying sizes [Stark ¶ 48 & Figs. 1 and 6A-6B, for instance].

6. Stark in view of Parshin and further in view of Jenkins teaches wherein the sensor data comprises geolocation data [Parshin col. 4, ll. 24-42].

7. Stark in view of Parshin and further in view of Jenkins teaches wherein the display device is configured to: receive user input processed by an event handler [Stark ¶ 33, user interaction]; and respond to the user input by executing at least an action [¶ 33: it may determine the proximity of another item, or a consumer's hand].

8. Stark in view of Parshin and further in view of Jenkins teaches wherein the at least a user input comprises one or more of a touch input and an audio input [Stark ¶¶ 29-32, where the user interacts with objects].

9. Stark in view of Parshin and further in view of Jenkins teaches wherein the at least an action comprises one or more of changing the content on the display device and triggering a notification to a downstream device [Stark ¶ 36: interface component 84 may receive user commands by way of a network such as a wireless or inter network].

10. Stark in view of Parshin and further in view of Jenkins teaches wherein the at least a sensor comprises an optical sensor, wherein the optical sensor is configured to: capture visual data [Stark ¶ 55]; process the visual data to identify one or more characteristics of the object and environment [¶ 46, with respect to the shelf environment]; and generate, using the controller, the content based on the identified characteristics of the object and environment [¶ 42].

11. Stark discloses an interactive object displaying apparatus [Fig. 1], the interactive object displaying apparatus comprising: at least a structure comprising one or more shelves [¶ 9], wherein the one or more shelves are configured to receive at least an object [Fig. 2, by means of 26, for instance]; a display device coupled to the at least a structure [¶ 20: each compartment is a display screen 22a, 22b, 22c or 22d (individually and collectively display screen(s) 22)], wherein the display device is communicatively connected to at least a sensor [¶ 23], wherein the display device is configured to display a content [¶¶ 19-20: sensor 34b is associated with compartment 32b; sensor 34c is associated with compartment 32c, and so on; display screens 22 and content sensors 34 are interconnected with one or more computing devices 30 operable to monitor the content of each compartment and thereby the "state" of shelf unit 12, and present video images on screens]; the at least a sensor coupled to the display device, wherein the at least a sensor is configured to detect sensor data [¶¶ 19-23]; and a controller communicatively connected to the at least a sensor and the display device, wherein the controller is configured to transmit the content to the display device [Fig. 3, 30 & ¶ 27].

Stark does not explicitly disclose the added limitation "wherein the at least a sensor is located on a flat surface of a housing of the display device." However, Parshin teaches a smart shelf apparatus with a sensor on a flat front surface of a housing [Fig. 2B, 112(2), for instance]. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Stark with Parshin, since such a modification optimizes sensing through sensor placement.

Stark in view of Parshin is silent on wherein the sensor data comprises a proximity signal based on a predetermined range from the at least a sensor. However, Jenkins teaches a smart shelf monitoring system [Figs. 7-8 & ¶ 70, for instance] wherein a user's device within range of a sensor triggers an alert [see ¶¶ 71-78, for instance]. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Stark in view of Parshin with Jenkins, since such a modification allows for optimized tracking and updating of devices.

12. Stark in view of Parshin and further in view of Jenkins teaches wherein the at least a sensor comprises a motion detection sensor, wherein the motion detection sensor [Stark ¶ 36, machine vision sensor] is configured to: detect a motion signal [id., presence vs. non-presence, for instance]; transmit the motion signal to the controller; and display, using the controller, the content [¶ 45].

13. Stark in view of Parshin and further in view of Jenkins teaches wherein the display device comprises a structure display device, wherein the structure display device is configured to display the content as a function of the at least an object [Stark ¶ 32: use sensors 34 to sense the contents of each compartment 32; under software control, computing device 30 may then control what is being displayed on each of screens 22 in dependence on the contents of the compartments].

14. Stark in view of Parshin and further in view of Jenkins teaches wherein displaying the content as a function of the at least an object comprises: identifying, using an object unique identifier of the sensor data, object data [Stark ¶¶ 32-33]; transmitting the object data to the controller [¶ 32]; and generating, using the controller, the content [id.: computing device 30, using conventional machine vision techniques, may use sensors 34 to sense the contents of each compartment 32; under software control, computing device 30 may then control what is being displayed on each of the screens].

15. Stark in view of Parshin and further in view of Jenkins teaches wherein the at least a structure comprises reconfigurable elements, wherein the reconfigurable elements are configured to accommodate at least an object of varying sizes [Stark ¶ 48 & Figs. 1 and 6A-6B, for instance].

16. Stark in view of Parshin and further in view of Jenkins teaches wherein the sensor data comprises geolocation data [Parshin col. 4, ll. 24-42].

17. Stark in view of Parshin and further in view of Jenkins teaches wherein the display device is configured to: receive user input processed by an event handler [Stark ¶ 33, user interaction]; and respond to the user input by executing at least an action [¶ 33: it may determine the proximity of another item, or a consumer's hand].

18. Stark in view of Parshin and further in view of Jenkins teaches wherein the at least a user input comprises one or more of a touch input and an audio input [Stark ¶¶ 29-32, where the user interacts with objects].

19. Stark in view of Parshin and further in view of Jenkins teaches wherein the at least an action comprises one or more of changing the content on the display device and triggering a notification to a downstream device [Stark ¶ 36: interface component 84 may receive user commands by way of a network such as a wireless or inter network].

20. Stark in view of Parshin and further in view of Jenkins teaches wherein the at least a sensor comprises an optical sensor, wherein the optical sensor is configured to: capture visual data [Stark ¶ 55]; process the visual data to identify one or more characteristics of the object and environment [¶ 46, with respect to the shelf environment]; and generate, using the controller, the content based on the identified characteristics of the object and environment [¶ 42].

Response to Arguments

Applicant's arguments filed on 03 December 2025 have been fully considered, but they are not persuasive. On p. 7 of 9, Applicant argues that Stark in view of Parshin does not teach or suggest the "wherein the at least a sensor is located on a flat surface of a housing of the display device" limitation. The examiner disagrees. Stark [Fig. 1, for instance] teaches ports for sensors [34]. Parshin teaches a smart shelf apparatus [Fig. 2A, for instance] with sensors on flat surfaces [108]. When modified [for the motivations detailed above], the shelf of Stark, which is prepped for sensors, includes the feature of those sensors being on a flat surface. As to the housing, the shelving itself is the housing for the display and thereby carries the sensors on its flat surface when modified. The rejection is maintained and made final.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to GUSTAVO POLO, whose telephone number is (571) 270-7613. The examiner can normally be reached Mon-Fri, 9am-5pm PT. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Patrick Edouard, can be reached at (571) 272-7603. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Gustavo Polo/
Primary Examiner, Art Unit 2622
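The reply-period rules in the conclusion (a three-month shortened statutory period, extendable under 37 CFR 1.136(a), with an absolute six-month statutory cap) reduce to simple date arithmetic. The sketch below applies them to the Jan 13, 2026 final rejection from the timeline; the month-addition helper is an illustration of the stated rules, not an official USPTO deadline calculator (it ignores, for example, weekend/holiday rollover and the advisory-action scenario):

```python
from datetime import date
import calendar

def add_months(d: date, months: int) -> date:
    """Add calendar months, clamping to the last day of the target month."""
    m = d.month - 1 + months
    year = d.year + m // 12
    month = m % 12 + 1
    day = min(d.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)

mailing_date = date(2026, 1, 13)              # final rejection mailed (from the timeline)
ssp_deadline = add_months(mailing_date, 3)    # three-month shortened statutory period
statutory_max = add_months(mailing_date, 6)   # absolute six-month statutory cap

print("Reply due (no extensions):", ssp_deadline)
print("Latest possible reply:", statutory_max)
```

Under these assumptions the unextended reply is due April 13, 2026, and no extension can push the reply past July 13, 2026.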

Prosecution Timeline

Dec 09, 2024: Application Filed
Feb 25, 2025: Non-Final Rejection (§103)
Apr 09, 2025: Interview Requested
Apr 28, 2025: Applicant Interview (Telephonic)
Apr 28, 2025: Examiner Interview Summary
May 01, 2025: Response Filed
May 30, 2025: Final Rejection (§103)
Aug 24, 2025: Request for Continued Examination
Aug 26, 2025: Response after Non-Final Action
Aug 29, 2025: Non-Final Rejection (§103)
Dec 03, 2025: Response Filed
Jan 13, 2026: Final Rejection (§103) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592210: DISPLAY PANEL AND A DISPLAY APPARATUS HAVING THE SAME WITH MODIFIED SHIELD ELECTRODE
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12573351: ENHANCED REFRESH RATE SELECTION
Granted Mar 10, 2026 (2y 5m to grant)

Patent 12567382: DATA DRIVER CONFIGURED TO PROVIDE DIFFERENT GAMMA SIGNALS TO RESPECTIVE AREAS AND DISPLAY DEVICE INCLUDING THE SAME
Granted Mar 03, 2026 (2y 5m to grant)

Patent 12555512: DISPLAY PANEL AND METHOD OF CONTROLLING SAME BASED ON CORRESPONDENCE BETWEEN COMPOSITE SIGNALS AND PIXELS
Granted Feb 17, 2026 (2y 5m to grant)

Patent 12555552: DISPLAY DEVICE, GAMMA VOLTAGE DATA GROUP SWITCHING METHOD AND MODULE
Granted Feb 17, 2026 (2y 5m to grant)

Study what changed to get past this examiner. Based on this examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 85%
With Interview: 98% (+12.7%)
Median Time to Grant: 2y 3m
PTA Risk: High

Based on 761 resolved cases by this examiner. Grant probability derived from career allow rate.
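The projection panel's numbers line up if the interview lift is treated as an additive percentage-point adjustment to the baseline grant probability. That combination rule is our assumption about how the dashboard works, not something the page states:

```python
# Combine baseline grant probability with the interview lift.
# Assumption: the lift is additive in percentage points, and the
# dashboard rounds the result to the nearest whole percent.

baseline = 85.0        # grant probability, %
interview_lift = 12.7  # percentage points

with_interview = baseline + interview_lift
print(f"With interview: {with_interview:.1f}% (displayed as {round(with_interview)}%)")
```

This yields 97.7%, which matches the displayed 98% after rounding.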
