DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Acknowledgment is made of applicant's claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed in parent Application No. 16/489,374, filed on 08/28/2019.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 2-4, 7-9, and 12-14 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Claims 2-4, 7-9, and 12-14 recite the limitation "the marker". There is insufficient antecedent basis for this limitation in these claims or their respective base claims.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 5, 6, 10, 11, and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Marman (US 20120062732 A1) in view of Wang et al. (US 20150356840 A1).
Regarding Claim 6, Marman discloses A display system (ABST reciting “A video system”) comprising:
a processor configured to execute the instructions to: (¶22 reciting “Video analytics 120 may be implemented in software and reside on a processor or may be implemented in hardware in a specialized video processor”.)
acquire a video from an imaging device; (¶21 reciting “imager 115 of camera 110 captures multiple images (e.g., video frames) of the field of view and produces a first set of image data representing the images of the field of view.”; and further, ¶22 reciting “The first set of image data may be communicated directly from imager 115 to video analytics 120”)
when a moving object is detected in a first position in the video, cause a display device to display a first marker indicating a position of the moving object at a position corresponding to the first position; (¶45 reciting “a colored bounding box 380 may be generated and superimposed over the image of the person when video analytics 120 detect and track the person.” Fig. 17)
when the moving object moves to a second position from the first position, cause the display device to display a second marker indicating a position of the moving object at a position corresponding to the second position; (¶45 reciting “As the person moves through the scene, the cropped-close up images presented in zoomed-in tracking window 355 automatically track the movement of the person. Moreover, dashed outline box 370 moves relative to scene viewing window 365 in unison with movement of the person.” Fig. 17)
cause the display device to display information based on an event detected in the video, together with the first marker and the second marker; and
in response to accepting an operation to the information based on the event, cause the display device to display an image at the point in time corresponding to the event, and a third marker at a third position corresponding to a position of the moving object in the image.
(Fig. 17 showing a scene representing a time capsule of a moving object 1700. ¶103 reciting “display management module 340 superimposes images 1705, 1710, 1715, 1720, 1725, 1730, and 1735 over a background image of the scene. Preferably, display management module 340 also produces a visible time stamp for each extracted snapshot corresponding to a time when the corresponding snapshot was captured by imager 115. For example, image 1705 includes a time stamp 00:01 above it indicating that image 1705 was captured within the first second of detection of object 1700. The time stamps create a time line so the user can understand the movement of object 1700 through the scene. Display management module 340 can generate other forms of time compression for video information. For example, display management module 340 can generate a graphical timeline for an object that includes one or both of associated time capsules and close-up snapshots. The user can move a cursor over a specific point in the time line to provide a pop-up view of the time capsule or snapshot corresponding to the specific point in time. The user can then click on the pop-up window to view an intelligent fast playback clip, a cropped close-up video clip, or the original video clip.”)
Marman discloses “Video analytics 120 may be implemented in software” (¶22) and “the term "module" is a component that may comprise one or more hardware circuits or devices or one or more software routines, functions, or objects.” (¶23). However, Marman does not explicitly disclose a memory configured to store instructions.
Wang teaches “an information processing system that can be used in a surveillance camera system” (¶2). Specifically, ¶326 recites “A non-transitory computer-readable medium having embodied thereon a program, which when executed by a computer causes the computer to perform a method, the method including: obtaining a plurality of segments compiled from at least one media source, wherein each segment of the plurality of segments contains at least one image frame within which a specific target object is found to be captured; and providing image frames of the obtained plurality of segments for display along a timeline and in conjunction with a tracking status indicator that indicates a presence of the specific target object within the plurality of segments in relation to time.”
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to implement the system (taught by Marman) by means of a general-purpose computer, with the processing modules implemented by reading software programs corresponding to the processing of each processing section from computer memory and executing those programs on the computer (taught by Wang). The suggestion/motivation would have been to apply a known technique to a known device (method, or product) ready for improvement to yield predictable results.
Claim 1 has limitations similar to those of Claim 6 and is therefore rejected under the same rationale as Claim 6.
Claim 11 has limitations similar to those of Claim 6 and is therefore rejected under the same rationale as Claim 6.
Regarding Claim 10, Marman in view of Wang discloses The display system according to claim 6,
wherein the event includes detection of the moving object. (Marman, ¶45 reciting “a colored bounding box 380 may be generated and superimposed over the image of the person when video analytics 120 detect and track the person. As the person moves through the scene, the cropped-close up images presented in zoomed-in tracking window 355 automatically track the movement of the person. Moreover, dashed outline box 370 moves relative to scene viewing window 365 in unison with movement of the person.”)
Claim 5 has limitations similar to those of Claim 10 and is therefore rejected under the same rationale as Claim 10.
Claim 15 has limitations similar to those of Claim 10 and is therefore rejected under the same rationale as Claim 10.
Claims 2-3, 7-8, and 12-13 are rejected under 35 U.S.C. 103 as being unpatentable over Marman (US 20120062732 A1) in view of Wang et al. (US 20150356840 A1), and further in view of MacDougall et al. (US 20130205203 A1).
Regarding Claim 7, Marman in view of Wang discloses The display system according to claim 6.
However, Marman in view of Wang does not explicitly disclose wherein the processor executes a process of changing a display mode of the marker displayed on the display device as time passes.
It is well known in the art to change a display marker color over time. In addition, MacDougall teaches “For example, the position tracking program may make a corresponding position on heat map position tracking interface 74 progressively a lighter and lighter shade from black to white, or a progressively different color starting from black and moving through dark red, bright red, orange, yellow, white, and blue, for example.” (¶41). In other words, MacDougall teaches changing a display mode of the displayed marker as time passes.
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the system (taught by Marman in view of Wang) to change a display mode of a displayed marker as time passes (taught by MacDougall). The suggestion/motivation would have been that the position tracking interface may “help a user keep track of the positions within the document where the user has made edits or has spent time viewing the document” (¶14), and to apply a known technique to a known device (method, or product) ready for improvement to yield predictable results.
Regarding Claim 8, Marman in view of Wang and MacDougall discloses The display system according to claim 7,
wherein, in the process of changing the display mode of the marker, the processor changes a color of the marker. (See Claim 7 rejections for detailed analysis.)
Claim 2 has limitations similar to those of Claim 7 and is therefore rejected under the same rationale as Claim 7.
Claim 3 has limitations similar to those of Claim 8 and is therefore rejected under the same rationale as Claim 8.
Claim 12 has limitations similar to those of Claim 7 and is therefore rejected under the same rationale as Claim 7.
Claim 13 has limitations similar to those of Claim 8 and is therefore rejected under the same rationale as Claim 8.
Claims 4, 9, and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Marman (US 20120062732 A1) in view of Wang et al. (US 20150356840 A1), and further in view of Flack et al. (US 20170244908 A1).
Regarding Claim 9, Marman in view of Wang discloses The display system according to claim 6, and displays the marker on the background image. (Marman, Fig. 17)
However, Marman in view of Wang does not explicitly disclose wherein the processor receives an input specifying a background image from a user.
It is well known in the art to replace a background with a new one specified by a user. In addition, Flack teaches “The user selects 80 new background content to be used to replace the background portion in the video stream. For example, as shown in FIG. 7, the new background content in this example is an image of a country scene 64.” (¶134). Further, ¶136 recites “As shown in FIG. 8, the result in this example is a composite video stream 66 that includes the foreground portion (the user) 60 superimposed on the selected background content 64.”
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the system (taught by Marman in view of Wang) to receive an input specifying a background image from a user (taught by Flack). The suggestion/motivation would have been to apply a known technique to a known device (method, or product) ready for improvement to yield predictable results.
Claim 4 has limitations similar to those of Claim 9 and is therefore rejected under the same rationale as Claim 9.
Claim 14 has limitations similar to those of Claim 9 and is therefore rejected under the same rationale as Claim 9.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to YI WANG whose telephone number is (571)272-6022. The examiner can normally be reached 9am - 5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jason Chan can be reached at (571)272-3022. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/YI WANG/Primary Examiner, Art Unit 2619