DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Acknowledgment is made of applicant's claim for foreign priority based on an application filed in the Republic of Korea on 29 June 2023. It is noted, however, that applicant has not filed a certified copy of the foreign priority application as required by 37 CFR 1.55. An attempt was made by the United States Patent and Trademark Office under the Priority Document Exchange program to automatically retrieve the foreign priority document, but that attempt was unsuccessful. Applicant must furnish the required foreign priority document.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-5, 7, 12, and 13 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Kim et al. (US 2012/0304085; hereinafter “Kim”).
Regarding claim 1, Kim discloses A method for a smart surveillance (“surveillance systems,” para. 2), comprising: receiving input data including image data obtained from a plurality of cameras (“video data streams are generated by cameras,” para. 34); detecting object coordinates indicating a location of an object present in a target area from the input data (“identifies first number of locations 122 [of Fig. 1] in images 114 for number of objects 115,” para. 45); calculating object mapping coordinates in a virtual three-dimensional (3D) space that correspond to the object coordinates in the target area (“model 135 [of Fig. 1] may be … a three-dimensional model of area 112,” para. 53; “Coordinate system 124 [of Fig. 1] for images 114 may map to geographic coordinate system 134 for model 135,” para. 55); placing a virtual object corresponding to the object in the virtual 3D space corresponding to the target area on the basis of the calculated object mapping coordinates; and generating display data for displaying the virtual 3D space including the virtual object (“forms number of graphical representations 126 [of Fig. 1] for number of objects 115,” para. 47; “displays number of graphical representations 126 [of Fig. 1] on model 135,” para. 61), wherein an image of the virtual object displayed in the virtual 3D space changes according to user requests or preset conditions (“user input 150 [of Fig. 1] may be received selecting graphical representation 129 … When graphical representation 129 is selected, information 148 is displayed in association with graphical representation 129,” para. 64).
Regarding claim 2, Kim discloses wherein the image of the virtual object displayed in the virtual 3D space includes a virtual image of the object that is generated from an object image captured by at least one camera among the plurality of cameras (“forms number of graphical representations 126 [of Fig. 1] for number of objects 115 using images 114 … a picture, a two-dimensional or three-dimensional image,” para. 47).
Regarding claim 3, Kim discloses wherein the virtual image is generated to reflect one or more of object characteristics including a color, a shape, a motion, and a size of the object corresponding to the virtual image (“A graphical representation … selected from one of, for example, without limitation, an icon, a graphical element, a symbol, a label, a shape, a picture, a two-dimensional or three-dimensional image,” para. 47).
Regarding claim 4, Kim discloses wherein, as a user request for detailed information about the virtual object or as the preset conditions are satisfied, an object image of the object corresponding to the virtual object that is obtained by the camera is provided (“a selection of graphical representation 129 [of Fig. 1] may cause information 148, in the form of number of videos 168, to be displayed in association with graphical representation 129 … number of videos 168 may be substantially real-time videos,” para. 71).
Regarding claim 5, Kim discloses wherein the object image includes a best shot or an event detection shot among images obtained by photographing the object corresponding to the virtual object (“graphical indicator 324 [of Fig. 3] is positioned over video 322 in window 320 to provide a location in video 322 for the person corresponding to person icon 310,” para. 87; the video 322 of Fig. 3 can be considered an object image, and the graphical indicator 324 can be considered an event detection shot because it detects a person moving around in the video).
Regarding claim 7, Kim discloses wherein the object image includes the event detection shot which is the object image captured upon detecting an event predetermined by a user (“identifies number of objects 115 [of Fig. 1] in images 114 in video data streams 110,” para. 45; “forms number of graphical representations 126 [of Fig. 1] for number of objects 115 using images 114,” para. 47; identifying an object is considered an event that is predetermined by a user).
Regarding claim 12, Kim discloses A program stored in a recording medium so that a computer performs the method according to claim 1 (“a computer program product comprises a computer readable storage medium,” para. 13).
Regarding claim 13, it is rejected using the same citations and rationales described in the rejection of claim 1, with the additional limitations of A smart surveillance and control device comprising: a memory configured to store input data; and a processor coupled to the memory, wherein the processor is configured to perform operations (“Processor unit 1404 [of Fig. 14] serves to execute instructions for software that may be loaded into memory,” para. 147).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Kim in view of Matsushita et al. (JP 2006214231; hereinafter “Matsushita”; a machine translation is provided for citations).
Regarding claim 6, Kim does not disclose wherein the object image includes the best shot which is the object image with a highest object identification score calculated based on a size of an area occupied by the object and an orientation and sharpness of the object.
In the same art of video surveillance, Matsushita teaches wherein the object image includes the best shot which is the object image with a highest object identification score calculated based on a size of an area occupied by the object and an orientation and sharpness of the object (“select the best shot image that is most suitable for identifying the face of a passerby from among the detected face images. The best shot image is selected based on information such as the size, brightness, face direction, and focus of the face image,” para. 18 of machine translation).
Before the effective filing date of the claimed invention, it would have been obvious to one having ordinary skill in the art to apply the teachings of Matsushita to Kim. The motivation would have been to improve image recognition by selecting the image “most suitable for identifying” the object (Matsushita, para. 18 of machine translation).
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Kim in view of Konen et al. (US 2015/0281507; hereinafter “Konen”).
Regarding claim 8, Kim does not disclose wherein, as an image obtained from a first camera among the plurality of cameras has a circular or elliptical frame, first input data transmitted from the first camera includes a format in which the image obtained from the first camera is disposed in a quadrangular frame; wherein the first input data includes one or more of camera coordinate information, object identification information, information about a distance between the detected object and the camera, and a time code as additional information; and wherein the additional information is disposed in an area between the image obtained from the first camera and the quadrangular frame.
In the same art of image transmission, Konen teaches wherein, as an image obtained from a first camera among the plurality of cameras has a circular or elliptical frame, first input data transmitted from the first camera includes a format in which the image obtained from the first camera is disposed in a quadrangular frame (“the camera module inside the wide-angle imager is a wide-angle lens producing a non-symmetrical camera module scene image content which is exactly reproduced to the scene image content 512 [of Fig. 5A]. With this kind of camera module inside the wide-angle imager, there are black corners 516 in the image,” para. 69; Fig. 5A illustrates an elliptical image inside a quadrangular frame); wherein the first input data includes one or more of camera coordinate information, object identification information, information about a distance between the detected object and the camera, and a time code as additional information (“parameters associated with the image, including, but in no way limited to, camera module identification, preferred processed image point of view coordinates (e.g., Pan, Tilt and Zoom), copyright data, pedestrian detection, tracking and recognition, face detection,” para. 41); and wherein the additional information is disposed in an area between the image obtained from the first camera and the quadrangular frame (“there are black corners 516 [of Fig. 5A] in the image and the markers 514 are usually added there to make sure the scene image content 512 is not altered,” para. 69; “outputs a marked image 115 [of Fig. 1] that includes, inside the image frame, a combination of both the scene image content 120 from the wide-angle image captured 110 and a marker 130 in which all of the imager parameters associated with the image is encoded,” para. 44).
Before the effective filing date of the claimed invention, it would have been obvious to one having ordinary skill in the art to apply the teachings of Konen to Kim. The motivation would have been “to define different user experiences or system behaviors automatically by instructing what and how to output depending on the specific parameters” (Konen, para. 10) and “it is without consequence on the scene image content to place the encoded markers in the corners” (Konen, para. 16).
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Kim in view of Tuukkanen (US 2013/0191507), and further in view of Konen.
Regarding claim 9, Kim does not disclose wherein, as an image obtained from a second camera among the plurality of cameras has a quadrangular frame, second input data transmitted from the second camera includes a format in which the image obtained from the second camera is disposed in a circular or elliptical frame.
In the same art of image transmission, Tuukkanen teaches wherein, as an image obtained from a second camera among the plurality of cameras has a quadrangular frame, second input data transmitted from the second camera includes a format in which the image obtained from the second camera is disposed in a circular or elliptical frame (“The composite image 906 [of Fig. 9] is in round format, for example, the boundary of the composite image 906 is round,” para. 72; Fig. 9 illustrates rectangular camera images contained within a circular frame 906).
Before the effective filing date of the claimed invention, it would have been obvious to one having ordinary skill in the art to apply the teachings of Tuukkanen to Kim. The motivation would have been to provide additional flexibility and compatibility by allowing for different image frame shapes.
The combination of Kim and Tuukkanen does not disclose wherein the second input data includes one or more of camera coordinate information, object identification information, information about a distance between the detected object and the camera, and a time code as additional information; and wherein the additional information is disposed in an area between the image obtained from the second camera and the circular or elliptical frame.
In the same art of image transmission, Konen teaches wherein the second input data includes one or more of camera coordinate information, object identification information, information about a distance between the detected object and the camera, and a time code as additional information (“parameters associated with the image, including, but in no way limited to, camera module identification, preferred processed image point of view coordinates (e.g., Pan, Tilt and Zoom), copyright data, pedestrian detection, tracking and recognition, face detection,” para. 41); and wherein the additional information is disposed in an area between the image obtained from the second camera and the … frame (“there are black corners 516 [of Fig. 5A] in the image and the markers 514 are usually added there to make sure the scene image content 512 is not altered,” para. 69; “outputs a marked image 115 [of Fig. 1] that includes, inside the image frame, a combination of both the scene image content 120 from the wide-angle image captured 110 and a marker 130 in which all of the imager parameters associated with the image is encoded,” para. 44).
Before the effective filing date of the claimed invention, it would have been obvious to one having ordinary skill in the art to apply the teachings of Konen to the circular or elliptical frame of the combination of Kim and Tuukkanen. The motivation would have been “to define different user experiences or system behaviors automatically by instructing what and how to output depending on the specific parameters” (Konen, para. 10). Note that while Konen does not specifically describe placing the marker between a quadrangular sensor image and an elliptical frame, Konen is considered to render obvious placing the marker in the unused area between any sensor image and any frame, and when those teachings are applied to the combination of Kim and Tuukkanen having quadrangular sensor images and a circular frame, the resulting combination would render obvious placing the marker between the quadrangular sensor image and the circular frame.
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Kim in view of Calisa (US 2006/0195876).
Regarding claim 10, Kim does not disclose wherein third input data transmitted from a third camera among the plurality of cameras includes image data obtained from the third camera and additional information; wherein the additional information includes one or more of camera coordinate information, object identification information, information about a distance between the detected object and the camera, and a time code; and wherein the additional information is included in an optional field of a packet header of video streaming data.
In the same art of video surveillance, Calisa teaches wherein third input data transmitted from a third camera among the plurality of cameras includes image data obtained from the third camera and additional information; wherein the additional information includes one or more of camera coordinate information, object identification information, information about a distance between the detected object and the camera, and a time code; and wherein the additional information is included in an optional field of a packet header of video streaming data (“FIG. 5A shows the video data file 2201 which, in respect of the video image record 2207, stores the JPEG header 2202 containing at least pan, tilt and zoom (PTZ) settings of the camera when the video image data 2208 was captured, and also the video image data 2208 for the video image record 2207. The video frames captured from the network camera 103 and accessed through the Storage Server Application 105 contain additional information at least about the camera control state, including information about camera position expressed as the pan, tilt and zoom settings of the camera when the image was captured,” para. 117).
Before the effective filing date of the claimed invention, it would have been obvious to one having ordinary skill in the art to apply the teachings of Calisa to Kim. The motivation would have been that “recorded surveillance data can be rapidly and conveniently viewed by the user” (Calisa, para. 4).
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Kim in view of Koto et al. (US 2004/0101137; hereinafter “Koto”).
Regarding claim 11, Kim does not disclose wherein the input data transmitted from one or more cameras among the plurality of cameras includes scrambled data and additional information; wherein the scrambled data is data obtained by scrambling image frames captured by the cameras; and wherein the additional information includes one or more of a camera installation purpose, an original authentication code, and a descrambling code.
In the same art of transmitting video, Koto teaches wherein the input data transmitted from one or more cameras among the plurality of cameras includes scrambled data and additional information; wherein the scrambled data is data obtained by scrambling image frames captured by the cameras (“a video signal scrambled by the scramble unit,” para. 40); and wherein the additional information includes one or more of a camera installation purpose, an original authentication code, and a descrambling code (“multiplexing the descramble key on … a video signal scrambled by the scramble unit,” para. 40).
Before the effective filing date of the claimed invention, it would have been obvious to one having ordinary skill in the art to apply the teachings of Koto to Kim. The motivation would have been “to prevent unauthorized duplication and unauthorized access for … video information” (Koto, para. 2).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Ryan McCulley whose telephone number is (571)270-3754. The examiner can normally be reached Monday through Friday, 8:00am - 4:30pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kee Tung, can be reached at (571) 272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/RYAN MCCULLEY/Primary Examiner, Art Unit 2611